IPR Blog

Expert analysis, debates and comments on topical policy-relevant issues

Expecting the unexpected: what resilience should mean to policymakers

Energy and environmental policy, Evidence and policymaking, Housing

Dr Kemi Adeyeye is Senior Lecturer in Architecture in the University of Bath's Department of Architecture and Civil Engineering. This post draws on material first presented in a recently published paper.

Evidence, and perhaps the experience of seemingly perpetual rain on one’s face, suggests that the weather is one thing that is increasingly variable and difficult to predict. The impact of this goes beyond deciding whether to take an umbrella, or wear an extra layer of clothing, when you go out in the morning. Like other shocks, temperamental weather can and does affect various aspects of economic, environmental and social life. In an ideal world, both policy and the built environment would be developed with a level of inbuilt resilience (that is, the capacity to cope with and absorb shocks), a recognition of the need to adapt, change and reorganise, and measures to mitigate the impact of future shocks.


Indeed, most human and physical systems are designed to cope with ‘extremes’ – but often only within the range of what is ‘expected’. ‘Unprecedented’ is now a term commonly used by politicians, the media and some experts to describe current weather events that are extreme beyond the expected range of extremity. One ‘unprecedented’ event is soon superseded by the next, however, and the next after that – so to what extent are these events really unprecedented? And to what extent can the impact and consequences of weather events such as flooding be considered a surprise? For scientific answers to these questions, I encourage the reader to review the work of my colleague Dr Thomas Kjeldsen. In this piece, however, I will spend some time considering the concept of anticipation, before concluding with what resilience should really mean to urban planners and policymakers.

Anticipating change

Studies show that, as human beings, we are ontologically programmed to engage in ideations that allow the anticipation of space, time, causality and subjective probability. This is referred to as our evolutionary potential[1] – i.e. our ability to promote preparedness and maximise the probability of proactive change through historical memory, knowledge, expertise and experience. Anticipation is innately formed through memory and experience rather than the unknown. To this end, we are prone to engage in mental time travel, reliving past experiences as the basis for imagining the future. However, we should also be aware that experiences are carried forward in time through memory (individual or collective), which means that such practices can affect welfare. That is, the effectiveness of memory and/or experience in engendering action and preparedness for resilience can vary depending on how we remember, with a consequent impact on the actual outcomes of shocks. The problem with relying too much on memory is that we soon forget – another useful evolutionary skill that helps us cope with trauma.

Anticipation can be both forward- and backward-looking. Using the term ‘unprecedented’ suggests that the extent of our anticipation remains backward-looking, and this supports the prevalent reactionary approach to resilience – whereby capacity is only expanded after it has been overwhelmed by an extreme event. But we need both: forward-looking anticipation, particularly in the context of climate change, needs to be underpinned by past learning. Now, I am sure that scenario planning is taking place across policy realms at present, building on our current tools and codes to explain and take action when the unexpected happens. However, this approach does not always translate into dynamic planning for potential future uncertainties – when a comprehensive, flexible response may be required for the next unprecedented scenario.

Rising above the flood

Take flooding. There are good social and economic reasons for current and future developments on or near water. In some instances there is also little choice. For example, much of the Netherlands lies at or below sea level; as discussed later, Dutch planning and building practices have therefore advanced to manage the associated risks effectively. For others, flooding can be cyclical, but also sudden. This introduces general and specific issues to do with quality of life; economic, environmental and social vulnerability; security; physical, urban and building resilience; and so on.

These are factors that should not be ignored. The OECD forecasts that, without effective change, the total global population exposed to flooding could triple to around 150 million by the 2070s as a result of continuing sea-level rise, increased storminess, subsidence, population growth and urbanisation. Further, asset exposure could grow dramatically, reaching US$35 trillion over the same period – roughly 9% of projected annual GDP. The NHS budget, for comparison, is currently around 7% of UK GDP. Unlike the NHS, however, inaction on resilience is a bill that is best avoided. Exposure to risks does not necessarily translate into impact when resilience is “designed in” through coping and adaptive mechanisms.

So how can we design systems that are resilient and able to contend with unpredictable challenges such as environmental change? Staying with the theme of flooding, we can learn from approaches that have worked at other times and in other places to better anticipate the future. We can learn not to be so set in our ways, but to dare to be flexible and embrace new ways of working. This is particularly important in the UK context, where our planning rules are entrenched in tradition and our design and building practices can be slow to evolve. Although innovative practices have started to emerge in some areas, changes remain piecemeal and inconsistently applied across the country. Unlike in countries whose building codes and standards serve as global exemplars, resilience requirements are still not explicit in the UK Building Regulations – so we are missing out on more consistent, widespread implementation, as well as losing the opportunity to promote resilience alongside current sustainability standards, especially in housing developments.

Facing the future

Better integration of good governance, planning, infrastructure and architectural design would be a good first step towards closing the gap between where we are today and our future potential. On governance, there need to be visionary, unambiguous and tangible planning policies and regulatory requirements for resilience – particularly in the built environment. Formal building and planning policies, as they stand, could do more to promote forward-looking design and planning solutions, or to facilitate the development of resilience and adaptive capacity against natural events.

But new laws and regulations will not be enough. More should also be done to better equip individuals and communities to plan and act in their own best interests, or even to participate actively in or influence policy processes. It should also be possible to improve individual and collective anticipation by drawing positively on experiences of, and effective responses to, past climatic extremes – “memory”. Improving agency by making better use of wider communication networks to provide access to information, raise awareness and prompt action for resilience would also be a positive step.

Building resilience

Examples as old as the Indus Civilisation[3] and as contemporary as the Waterwijk in Ypenburg show that good governance and social measures are not enough on their own. Effective planning, good infrastructure and innovative architecture should be combined to reduce physical and social vulnerabilities. This underpins the argument for an integrated design approach to resilience (Figure 1).

Figure 1. Combined integrated resilience map showing applicability and impact
The chart (after Roberts, 2013[2]) presents combined case study findings along two axes, in four quadrants. The x-axis shows the contributions of key stakeholders: governance representatives; professionals such as architects, engineers and planners; and the public. The y-axis shows the physical outputs through planning, building and infrastructure solutions. The content of the map presents the physical and social solutions, highlighting impact (the size of the circles) and range, based on the six applicability measures presented in the conceptual framework. In many instances the applicability measures overlap, so the map shows the most relevant measure for each case.

Policymakers and planners of the built environment who adopt such an approach should aim for three major goals. Firstly, to deliver solutions that emphasise social place-making and capacity building – building communities whilst placing water at the forefront of communal consciousness, for example. Secondly, to implement resilient infrastructural solutions that are flexible but future-proof. Thirdly, to encourage solutions that do not simply hide water in underground drainage networks, but rather integrate it into the social fabric of a community through planning, engineering and architectural design.

Collaborative working between policymakers and diverse stakeholders – including building professionals – is key to achieving this. Planners should work positively with architects and engineers to deliver the most effective solution possible within the individual context. Innovative architectural ideas and solutions should be encouraged and, further, the needs of the public should be fully integrated within the decision-making process. For this to happen, government departments will need to talk and work more effectively together at the national, regional and local levels. There also need to be better mechanisms to include knowledge agents and the public in solution-forming conversations; technologies such as smart web-tools and innovative apps can help to facilitate this process.

 

[1] Sahlins, M. D. and Service, E. R. (eds.) (1960), Evolution and Culture, University of Michigan Press, Ann Arbor, Michigan.
[2] Roberts, C. (2013), ‘Planning for Adaptation and Resilience’, in McGregor, A., Roberts, C. and Cousins, F. (eds.), Two Degrees: The Built Environment and Our Changing Climate, Routledge.
[3] Part 1 of Dr Sona Datta’s BBC documentary series Treasures of the Indus may still be available on BBC iPlayer: http://www.bbc.co.uk/programmes/p030wckr/p030w89h

 

Sea-Changes in World Power

Political history, Political ideologies, US politics

In 1907, Theodore Roosevelt sent the US Navy battle fleet – the “Great White Fleet” of 16 battleships – on a symbolic tour of the Pacific. It was an awesome demonstration of the USA’s new naval power and an announcement to the world of its claims to dominion over the Pacific. The fleet was feted everywhere it went, but particularly so in Australia and New Zealand, where it was welcomed as the “kith and kin of the Anglo-Saxon race” bringing “a grateful sense of security to the white man in his antipodean isolation.” Japan was a rising military power. It had annihilated the Russian fleet in 1905. Racist attitudes towards Japanese migrant workers were running high in the USA and Australasia. “Stars and Stripes, if you please/Protect us from the Japanese”, wrote a New Zealand correspondent.


Roosevelt saw the fleet’s tour in similar terms. He was resolved to treat the Japanese government with courtesy and respect. But he wanted to assert the importance of keeping the world’s “races” apart, particularly when it came to migration into California, and he inflected his Social Darwinist arguments with a class populism: “we have got to protect our working men”, he was reported to have argued. “We have got to build up our western country with our white civilization, and…we must retain the power to say who shall and who shall not come to our country. Now it may be that Japan will adopt a different attitude, will demand that her people be permitted to go where they think fit, so I thought it wise to send that fleet around to the Pacific to be ready to maintain our rights”[1].

Roosevelt was heavily influenced by the naval strategist Admiral Alfred Mahan, whose books on the importance of sea power and naval strength were key military texts in the late 19th and early 20th centuries, read and absorbed not just by US foreign and defence policymakers, but by their counterparts in the capitals of all the leading world powers – including Great Britain, whose naval prowess Mahan much admired. Mahan was also highly influential on Roosevelt’s fifth cousin, Franklin D. Roosevelt, who devoured his books as a young man and was a lifelong navy enthusiast, serving as Assistant Secretary of the Navy in Wilson’s administration. As President, FDR would massively expand the US Navy. Spending on the navy – a sort of naval Keynesianism – gave renewed impetus to the New Deal in the late 1930s.

Donald Trump’s speech at the Newport News shipyard, which builds ships for the US Navy, and his pledge to expand the fleet to 350 ships, therefore stands in a clearly defined lineage. It heralds a renewed commitment to assert the naval primacy of the USA and significantly boost military spending. On its own, that might be lifted straight out of the recent Republican playbook – particularly in concert with tax cuts for the wealthy. But Trump’s economic nationalism and his anti-Muslim, anti-immigration rhetoric also trace a line back to fin-de-siècle Anglo-Saxonist political discourse. His rhetoric symbolically connects the projection of economic and military power to the fortunes of the American working class, particularly the white working class – Teddy Roosevelt shorn of the progressivism and diplomatic tact.

This time, of course, the main antagonist is China, not Japan. China’s navy has been expanding rapidly under Xi Jinping’s leadership. It has commissioned new missile carriers, frigates, conventional and nuclear submarines, and amphibious assault ships. A close ally of Xi’s, Shen Jinlong, has recently been appointed its commander. It has moved from defensive coastal operations to long-range engagements around the world. It will serve to underpin China’s assertion of supremacy in the South China Sea and the projection of its power further afield – towards the Indian Ocean, the Gulf and the Maritime Silk Road routes.

The respective strength and reach of national navies can mark out wider shifts in geo-political power. It was at the Washington Conference in 1921 that the USA finally brought the Royal Navy to heel, insisting on parity in capital ships, and setting the seal on the end of the British Empire’s global maritime supremacy. “Never before had an empire of Britain’s stature so explicitly and consciously conceded superiority in such a crucial dimension of global power,” wrote Adam Tooze of this capitulation. It would take until the late 1960s, when Britain finally abandoned its bases East of Suez, for the process of imperial contraction to be complete (a decision that the current Foreign Secretary laments and risibly promises to reverse).

With tension rising in the South China Sea, war and rival power conflict in the Middle East and the Gulf region, and the prospect of a scramble for power over the sea lanes of the melting ice caps of the North West Passage, this new era of naval superpower rivalry echoes the Edwardian world. Steve Bannon, President Trump’s self-declared economic nationalist adviser, believes it will end the same way: in war. It is up to the rest of the world to prove him wrong.

 

 

[1] For this quotation and other source material, see Lake, M. and Reynolds, H. (2008), Drawing the Global Colour Line, Cambridge: CUP, Chapter 8, pp. 190–209.

 

Timing it wrong: Benefits, Income Tests, Overpayments and Debts

Data, politics and policy, Welfare and social security

Professor Peter Whiteford is a Professor in the Crawford School of Public Policy at the Australian National University and Professor Jane Millar is a member of the Institute for Policy Research (IPR) Leadership Team, in addition to her role as Professor of Social Policy at the University of Bath.

Unexpected bills can be a challenge for any household. But for people who rely on social security payments, unexpected news of a significant debt – sometimes dating back years – can be bewildering to say the least. This is exactly what tens of thousands of Australians have experienced in recent months.

Since just before Christmas, Centrelink’s use of a new automated data-matching system has resulted in a significant increase in the number of current and former welfare recipients identified as having been overpaid and, thus, being in debt to the government. The data-matching system seems to have identified people with earned income higher than the amount reported when their benefits were calculated.


Many of these people were alarmed when Centrelink contacted them about the assumed debt. Their stories have been recounted over the past two months in the mainstream media and on social media. The controversy prompted the Shadow Human Services Minister, Linda Burney, to request an Auditor-General’s investigation. After receiving more than one hundred complaints about problems with the debt-recovery process, independent MP Andrew Wilkie asked the Commonwealth Ombudsman to step in, and he has since launched an investigation. The Senate Community Affairs References Committee will also examine the new process.

This is by no means Australia’s first social security overpayment controversy. The last storm was sparked by the expansion and fine-tuning of family tax benefits in 2000. Under that new system, families were given the option of taking their payments as reductions in the income tax paid on their behalf by their employer. To ensure that this group was treated in the same way as those who received cash benefits from Centrelink, the government introduced an annual reconciliation process. Before the beginning of each financial year, families were asked to estimate what their income would be in the subsequent tax year; later, after they had filed their tax returns, an end-of-year reconciliation process would bring income and family benefits into line.

This seemed like a rational system. People who had been underpaid could receive a lump sum to ensure their correct entitlement. People who had been overpaid would pay back the money that they weren’t entitled to keep. The reconciliation would correct any mistakes people made when they estimated their income for the year ahead (not necessarily an easy task to get right!) and make the system responsive to changes in income during the year.
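The reconciliation arithmetic can be sketched with invented figures. The income-free area, taper rate and maximum benefit below are hypothetical parameters chosen for illustration, not actual family tax benefit rates:

```python
# A minimal sketch of end-of-year reconciliation under a simple means test.
# All parameters and figures are hypothetical, for illustration only.
INCOME_FREE_AREA = 30000.0   # annual income disregarded by the means test
TAPER = 0.2                  # benefit withdrawn per dollar above the free area
MAX_BENEFIT = 5000.0         # maximum annual benefit

def annual_benefit(income: float) -> float:
    """Benefit entitlement for a given annual income."""
    excess = max(0.0, income - INCOME_FREE_AREA)
    return max(0.0, MAX_BENEFIT - TAPER * excess)

estimated_income = 35000.0   # the family's estimate before the year begins
actual_income = 42000.0      # the income later shown on the tax return

paid_during_year = annual_benefit(estimated_income)   # paid on the estimate
true_entitlement = annual_benefit(actual_income)      # owed on actual income

# Reconciliation: actual income above the estimate means an overpayment (a
# debt); actual income below it would mean an underpayment (a lump sum owed).
overpayment = paid_during_year - true_entitlement
print(f"Paid on estimate:  ${paid_during_year:,.2f}")   # $4,000.00
print(f"True entitlement:  ${true_entitlement:,.2f}")   # $2,600.00
print(f"Overpayment/debt:  ${overpayment:,.2f}")        # $1,400.00
```

A family whose actual income fell short of its estimate would simply see the sign flip: the reconciliation would pay out the difference rather than raise a debt.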

But many families’ estimates at the start of the year proved to be poor guides to income received during the year. This happened in both directions – some estimates were too high, some too low – but most often real annual incomes were higher than predicted. The result was a very large increase in overpayments and, thus, in debts. Before the new system was introduced, just over 50,000 families had debts at the end of each year; in the first year of the new system, an estimated 670,000 families received overpayments. Overall, around one third of eligible families incurred an overpayment in the first two years of the new system.

This is how the system was designed to work. But for the families who found themselves owing sometimes large and usually unexpected debts, the experience created confusion, stress and anger. It also generated considerable controversy in parliament and the media. So, in July 2001, just before an important by-election, the Howard government announced a waiver of the first $1,000 of all overpayments, which reduced the number of families with debts to around 200,000. Further fine-tuning came in 2002, also aimed at reducing overpayments and debts. Then, in 2004, an annual lump sum was added to family tax benefit A with the aim of offsetting any overpayments.

***

At around this time, Britain was designing and introducing a new system of tax credits for people in work (the working tax credit) and for families with children (the child tax credit). The system had some features in common with the Australian approach, including an end-of-year reconciliation. The British government was keen to avoid the sort of controversy that had blown up in Australia, so it included a mechanism for changing the level of tax credit not just at the end of the year but during the year as well.

The assessment for credits was initially made on the basis of gross family income in the previous tax year. If recipients reported changes in income and circumstances during the year, the award was adjusted, and at the end of the year total credits and income were reconciled. But many changes in income and circumstances went unreported during the year, so in practice considerable adjustment was required. Over the first few years of the system, around 1.9 million awards were overpaid each year.

As in Australia, the system caused significant hardship and generated adverse media coverage and much concern. In 2005 and 2006, the British government introduced a number of changes designed to reduce overpayments, including a very substantial increase in the level of the annual income “disregard” from £2,500 to £25,000. This meant that family income could rise by up to £25,000 in the current award year before tax credits were reduced. The amount has since been brought back to the original £2,500, which will probably mean overpayments will start to rise again. Processes exist for recovering overpayments of tax credits and housing benefits, and these sometimes attract some media attention, most recently in relation to the use of private debt collectors.

***

Together with the current Centrelink controversy, the experience of these earlier cases offers four main lessons for social security policy.

First, getting payments “right” in any means-tested system is a complex process necessarily involving trade-offs between responsiveness and simplicity. If the aim is to precisely match income and benefit in real time, then there must be constant updating and checking of income and adjustments of benefits. But such a system would be very intrusive and administratively complex. So systems are designed to pay first and reconcile later, which makes overpayments almost inevitable.

Governments can minimise the impact by disregarding some overpayments, as both Australia and Britain have done in the past. But that is not part of the design of Australia’s latest program of debt recovery. People are being chased partly because the Budget Savings (Omnibus) Act 2016 toughened repayment compliance conditions for social welfare debts. New conditions include an interest charge on the debts of former social welfare recipients who are unwilling to enter repayment arrangements, extended Departure Prohibition Orders for people who are not in repayment arrangements for their social welfare debts, and the removal of the six-year limitation on debt recovery for all social welfare debt.

People ardently dislike systems that they don’t understand and feel are unfair, or that seem to create debts beyond their control. A very stringent approach to collecting overpayments can cause real hardship and generate controversy. It has even been suggested that there may be a punitive element to this, with Centrelink staff not encouraged or required to help people to correct errors.

Second, IT systems are not by themselves the cause of these problems. It is easy to blame the technology when things go wrong, and some problematic factors do indeed appear to be technological. The names of employers provided to the Australian Tax Office and Centrelink don’t always match, for example, and it appears that in some cases the same income is counted twice because the assessment process matches names rather than Australian Business Numbers.

More significantly, Centrelink’s formula can produce false estimates of debts when individuals are asked to confirm their annual income reported to the Australian Tax Office, because it simply divides the reported annual wage by twenty-six. That overly simplified calculation will only produce a useful figure if individuals receive exactly the same income each fortnight, which is often not the case, especially for casual workers, students and other people with intermittent work patterns.
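The effect of dividing an annual wage by 26 can be sketched with invented figures. The means-test parameters below are hypothetical, not actual Centrelink rates; the point is that a worker whose earnings are concentrated in part of the year is correctly paid while not working, yet averaging re-assesses those fortnights as if income had been earned in each of them:

```python
# Illustrative sketch (hypothetical parameters): how averaging an annual wage
# over 26 fortnights can manufacture a debt for an intermittent earner.
FORTNIGHTS_IN_YEAR = 26
INCOME_FREE_AREA = 437.0   # hypothetical fortnightly income-free threshold
TAPER = 0.5                # hypothetical withdrawal rate above the threshold
MAX_BENEFIT = 500.0        # hypothetical maximum fortnightly payment

def fortnightly_benefit(income: float) -> float:
    """Payment for one fortnight under a simple means test."""
    excess = max(0.0, income - INCOME_FREE_AREA)
    return max(0.0, MAX_BENEFIT - TAPER * excess)

# A worker is unemployed for 20 fortnights (correctly reporting $0 income and
# receiving the full benefit), then works the rest of the year, earning
# $15,600 in total while receiving no benefit at all.
fortnights_on_benefit = 20
annual_wage = 15600.0      # the figure reported to the tax office

amount_actually_paid = fortnightly_benefit(0.0) * fortnights_on_benefit

# The averaging method spreads the annual wage evenly over all 26 fortnights
# and re-assesses the benefit fortnights as if $600 had been earned in each:
averaged_income = annual_wage / FORTNIGHTS_IN_YEAR            # 600.0
reassessed = fortnightly_benefit(averaged_income) * fortnights_on_benefit

apparent_debt = amount_actually_paid - reassessed
print(f"Paid (correctly):   ${amount_actually_paid:,.2f}")   # $10,000.00
print(f"Re-assessed:        ${reassessed:,.2f}")             # $8,370.00
print(f"Apparent 'debt':    ${apparent_debt:,.2f}")          # $1,630.00
```

No overpayment occurred in this example; the entire "debt" is an artefact of assuming the same income in every fortnight.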

But these problems are not necessarily the fault of the IT, which is only doing what it has been designed to do. More checking by humans would probably reduce errors, but outcomes that result from the design of the policy can’t be resolved by technical fixes.

Third, IT systems are not by themselves the solution either. It is possible that the earlier problems with overpayments of family tax benefits may recur very soon. In early February, the federal government introduced a new omnibus savings bill to parliament, combining and revising several previously blocked welfare measures into a single piece of legislation in order to save nearly $4 billion over the next four years, after allowing for increased spending on childcare and family tax benefits. By far the most significant of the projected savings in the bill – $4.7 billion over four years – results from phasing out the end-of-year supplements for family tax benefit recipients, which were introduced to solve the overpayment and debt problems referred to earlier.

So why would the government think that the overpayment of family payments and the subsequent debt problem will be resolved, as this saving seems to assume? The answer is not entirely clear, but seems to relate to the update of Centrelink’s computer system announced in 2015. “The new technology to underpin the welfare system will offer better data analytics, real-time data sharing between agencies, and faster, cheaper implementation of policy changes,” Marise Payne, then human services minister, said at the time. “This means customers who fail to update their details with us will be less likely to have to repay large debts, and those who wilfully act to defraud taxpayers will be caught much more quickly.”

Complementing the Centrelink update are proposed changes in reporting systems at the Australian Tax Office, particularly the introduction of a single-touch payroll system. Under the new system, when employers pay their staff, the employees’ salary or wages and PAYG withholding amounts will automatically be reported to the Tax Office, which can then share this data with Centrelink.

The government seems to be assuming that computer and system updates will provide a technological fix to the problem of family tax benefit overpayments – and thus deliver a saving of $4.7 billion over the next four years. But what if the new IT systems don’t work in the ways envisaged? The Australian Tax Office’s computer system has crashed a number of times over the past year. Indeed, in the very same week that the government introduced the new omnibus savings bill, newspaper reports of this “tech wreck” suggested that the Tax Office might not be able to guarantee this year’s lodgement of returns in time for the start of the new financial year. The reports also noted that the development of the single-touch payroll system would remain one of the Tax Office’s priorities for this year.

Finally, to reiterate our first point, these problems have arisen from policy choices and design. Britain is introducing a new system, Universal Credit, which will use real-time adjustments to track changes in earnings and seek to match awards to income on a monthly basis. How well this will work in practice remains to be seen. In both countries, trends towards more insecure and variable employment patterns – and hence irregular pay packets – will make balancing accuracy and timeliness in means-tested welfare benefits more difficult. The assumption of regular and unchanging income no longer holds, and this new reality requires a policy, not a technical, solution.

This piece originally appeared on INSIDE STORY.

 

Labour’s weakness leaves the Tories free to do as they please

Democracy and voter preference, Political ideologies

This article first appeared in the Financial Times.

Soul-searching about the electoral prospects of the Labour party has been a British political pastime for decades. After Labour’s defeat at the 1959 general election, Anthony Crosland, the party’s pre-eminent revisionist intellectual, published a Fabian pamphlet entitled “Can Labour Win?” His argument was that economic growth had shrunk the industrial working class and swelled the ranks of an affluent middle class, transforming the electoral battleground on which Labour had to fight.


Pamphlets and polemics have been published with variations on that theme ever since, always after Labour has lost elections. With the exception of a bout of civil war in the early 1980s, Labour has responded to each defeat by seeking to broaden its appeal and modernise its policies. In each era, it has succeeded in getting re-elected.

The results of Thursday’s by-elections paint a bleaker picture, however. It is not simply that Labour’s current leader, Jeremy Corbyn, is unpopular, or that his brand of reheated Bennism holds little appeal for most voters (the chances of his leading Labour into the next general election must now be considered minimal). It is that in the heyday of postwar social democracy, Labour won handsomely, whatever the national result, in seats like Copeland (which it lost on Thursday) and Stoke-on-Trent Central (which it held with a reduced majority).

Since then, three things have happened in these constituencies and others like them: turnout has fallen dramatically, the number of parties contesting the seats has multiplied and the Labour majority has been slashed. The party’s grip on power in its historic strongholds is now more tenuous than at any time since the 1930s, when it was split and faced a popular National government.

Until relatively recently, Labour could rely on its working-class supporters, even as the industrial society that shaped their allegiances steadily disappeared. Today, age and social class inequalities in voting patterns work decisively against the party. Older, middle-class voters turn out in much greater numbers than working-class and younger voters, which disproportionately benefits the Conservatives. Theresa May has been adept at consolidating this older voting bloc behind her government.

The prime minister has used the Brexit vote to offer a new configuration of Conservative politics that is both Eurosceptic and post-Thatcherite, detaching the interventionist, One Nation economic and social traditions of the party (at least in rhetoric, if not yet in practice) from its enfeebled pro-European wing. It is an electorally potent combination, which has had the effect, not just of boxing Labour into liberal, metropolitan Britain, but of holding down the UK Independence party’s vote.

Breathless post-Brexit talk of Ukip eating away the core Labour vote in the north of England has now given way to a more sophisticated appreciation of the flows of voters between the parties — flows from which the Conservatives, and to a lesser degree the Liberal Democrats, appear to be the winners.

Britain’s new electoral geography has also undermined Labour. Once, the party could bring battalions of MPs to Westminster from Scotland, Wales and northern England, where it was indisputably dominant. Now it fights on different fronts against multiple parties across the UK, a national party in a fracturing union. In Scotland, its support has been cannibalised by the Scottish National party, while the Conservatives have picked up the unionist vote there.

In Wales, party allegiances have split in different directions, while in England, the collapse of the Liberal Democrats at the last general election handed a swath of seats to the Conservatives. The EU referendum added another layer of complexity, splitting coastal, rural and post-industrial areas from cities and university towns, and leaving Labour facing in different directions, trying to hold together a coalition of voters with divergent views.

Any Labour leader would struggle in these circumstances — renewing the party’s fortunes at a time of national division is a monumental task. But it is now clear that the surge of support for Mr Corbyn in 2015 was less a new social movement giving energy and purpose to the Labour party, than a planetary nebula collecting around a dying star.

Labour’s weaknesses leave pro-Europeans bereft of political leadership at a critical time. In the absence of an effective opposition that can marshal blocking votes in parliament, the government is able to conduct the politics of Brexit internally. Countervailing forces are restricted to alternative centres of power, such as Scotland or London, and civil society campaigns that are only just starting to form. Big business is curiously mute and the trade unions have other priorities. On the most important question facing Britain, political power is dangerously lopsided.

Yet there are still grounds for optimism on the left, however small. Britain’s radical political traditions — liberal, as well as social democratic — are resilient and resourceful ones, particularly when they combine forces. The defeats inflicted on progressive parties in recent elections around the world have been narrow, not decisive, suggesting that talk of a nationalist turn in the tide of history is overblown. While British Conservatism may be remarkably adaptive, Brexit will be a severe test of it.

Five years after Crosland posed the question of whether Labour could win, Harold Wilson became prime minister in a blaze of the “white heat” of technology. It will not be Mr Corbyn, and it will take a lot longer this time, but Wilson may yet have a successor who can do the same.

 

Shifting the public conversation on mental health – understanding the social conditions that shape private troubles

📥  Evidence and policymaking, Health

Professor Simone Fullagar is Professor of Sport and Physical Cultural Studies in the University of Bath's Department for Health

Mental health professionals, NGOs and a variety of service-user groups have all called for greater funding for local and global mental health services, as well as for greater parity of esteem between these services and broader health policy and service provision in the UK. The Mental Health Taskforce’s 2016 report details the need to address chronic under-spending on mental health services in the UK as demand continues to increase and inequalities widen. NHS spending is increasing in areas that support a medicalised response to mental health issues, with prescriptions for antidepressant medication doubling over the last decade in the UK. The taskforce’s report recommends a billion-pound investment in 2020/21 and calls for fresh thinking to shift cultural attitudes that stigmatise mental ill health as an individualised problem. Recently Theresa May announced a review of child and adolescent services in England and Wales and investment in mental health first aid training for schools. This is an important step, but how far will it go, given that from 2010 to 2015 there was a reduction of 5.4% in the funding of child and adolescent mental health services in the UK?


 

Young people are a major focus of concern: they suffer from high rates of depression, anxiety and eating disorders, and are vulnerable to developing more severe and enduring conditions. National survey data indicates a worsening picture for young women aged 15-18, who have the highest rates of depression and anxiety in the UK. Suicide rates have increased, with young men experiencing higher rates of suicide than young women, who in turn have higher rates of hospital admission for self-harm. One in four (26%) women aged 16 to 24 identify as having anxiety, depression, panic disorder, phobia or obsessive compulsive disorder.

The case for greater funding for mental health services is supported by a growing body of evidence which points to the value of investing in appropriate support and early intervention. Recent psychological research in the UK found that different therapeutic approaches to adolescent depression (cognitive behavioural therapy (CBT) and psychosocial interventions) have similar beneficial effects. Across different approaches there is a common thread emphasising the importance of developing a ‘therapeutic alliance’ with a young person so that they are able to engage effectively with support (feeling heard and respected, avoiding further stigmatisation, being involved in coproducing services, etc). The question of what works best for young people with a range of needs and diverse social backgrounds is an important one, given the role of the Improving Access to Psychological Therapies programme in increasing access to psychological therapy via CBT as a technical formula. Research has identified that 40–60% of young people who start psychological treatment drop out against advice. A high proportion of people also do not seek help from professionals despite the recurrence of common mental health issues. All these factors point to the complexities surrounding clinical and community-based mental health provision. A positive shift in recent years has been an increasing recognition of the importance of involving people with lived experience in the coproduction of localised services that move beyond privileging biomedical treatments and support a recovery-oriented approach (for example, the Wellbeing College for adults has been created in Bath).

While this focus on funding more personalised support is incredibly important for people experiencing all kinds of distress, we also need broader public conversations and policy approaches that offer a critical understanding of how private troubles connect with our public lives to acknowledge the social determinants of mental health. Mental health problems are associated with social injustice, marginalisation and the embodied distress of trauma – poverty, discrimination (class, gender, sexuality, ethnicity etc), poor housing, unemployment, social isolation, gender-based violence, childhood abuse and intensified bullying in the digital age. In the context of austerity measures and cuts to public funding across a range of areas, it is perhaps not surprising that private troubles and social suffering are exacerbated.

Mental health and illness are also highly contested concepts, with diverse and often competing trajectories of thought about the biopsychosocial causes and conceptualisations of distress. Public knowledge of ‘mental illness’ is historically shaped by our diagnostic cultures of psy-expertise (from the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) to digital self-assessments), the rise of brain science and research funded by Big Pharma, and the less-often-heard accounts of those with lived experience (including a diverse range of identities – service users, consumers, and members of anti-psychiatry, hearing voices and mad pride movements). While there is often great media interest in studies claiming to identify the biological cause of problems in the brain (often visualised via high-tech images), many people would be surprised to know that there are no specific biomarkers for ‘mental illness’ – and theories about why antidepressant medication works for some people (with effects similar to placebo and other non-pharmacological treatments) are based on hypothesis rather than established fact.

If we look at the national data cited earlier, we can see that gender figures as an important variable – yet there is a curious absence of gender analysis in mental health policy and service provision, despite the growing research in this area. My own sociological research into women’s experiences of depression and recovery identified the often highly problematic effects of the antidepressant medication prescribed to help them recover. Women spoke of how their embodied distress was heightened by side-effects, and how feelings of emotional numbness exacerbated their sense of ‘failing’ to recover despite following expert biomedical advice. Suicidal thoughts and attempts were evident, alongside guilt about not living up to the normative ‘good woman’ ideals of self-sacrificing mother, productive worker or caring wife. Others identified a feeling of being paradoxically trapped in a sense of dependency on a drug that helped them to feel more ‘normal’ and thus able to manage the gendered inequalities and pressures of their lives with demanding caring roles, work or unemployment. Restrictive gender norms; experiences of inequality that intersect with class, ethnicity, religion, sexuality and age; and a lack of gender-sensitive provision within mental health services and beyond (childcare, housing, domestic violence support, access to low-cost community activities that support wellbeing) were key policy-related issues. The policy challenge ahead of us is to understand the complexity of how mental health is affected by, and affects, all aspects of social life. Social science research has a unique contribution to make in rendering critical issues (such as gender inequalities) visible in the development of a whole range of approaches, in decision-making about resources, and in public dialogue about how we understand the social conditions that shape distress and support wellbeing in the contemporary era.

 

The World in 2050 and Beyond: Part 3 - Science and Policy

📥  Evidence and policymaking, Science and research policy, Security and defence

Lord Rees of Ludlow is Astronomer Royal at the University of Cambridge's Institute of Astronomy, and founder of the Centre for the Study of Existential Risk. This blog post, the third in a three-part series, is based on a lecture he gave at the IPR on 9 February. Read the first part here, and the second part here.

Even in the 'concertina-ed' timeline that astronomers envisage – extending billions of years into the future, as well as into the past – this century may be a defining era. The century when humans jump-start the transition to electronic (and potentially immortal) entities that eventually spread their influence far beyond the Earth, and far transcend our limitations. Or – to take a darker view – the century where our follies could foreclose this immense future potential.


 

One lesson I’d draw from these existential threats is this. We fret unduly about small risks – air crashes, carcinogens in food, low radiation doses, etc. But we’re in denial about some newly emergent threats, which may seem improbable but whose consequences could be globally devastating. Some of these are environmental, others are the potential downsides of novel technologies.

So how can scientists concerned about these issues – or indeed about the social impact of any scientific advances – gain traction with policy-makers?

Some scientists, of course, have a formal advisory role to government. Back in World War II, Winston Churchill valued scientists' advice, but famously kept them "on tap, not on top". It is indeed the elected politicians who should make decisions. But scientific advisers should be prepared to challenge decision-makers, and help them navigate the uncertainties.

President Obama recognised this. He opined that scientists' advice should be heeded "even when it is inconvenient – indeed, especially when it is inconvenient". He appointed John Holdren, from Harvard, as his science adviser, and a ‘dream team’ of others were given top posts, including the Nobel physicist Steve Chu. They had a predictably frustrating time, but John Holdren 'hung in there' for Obama’s full eight years. And of course we’re anxious about what will happen under the new regime!

Their British counterparts, from Solly Zuckerman to Mark Walport, have it slightly easier. The interface with government is smoother, the respect for evidence is stronger, and the rapport between scientists and legislators is certainly better.

For instance, dialogue with parliamentarians led, despite divergent ethical stances, to a generally-admired legal framework on embryos and stem cells – a contrast to what happened in the US. And the HFEA offers another fine precedent.

But we've had failures too: the GM crop debate was left too late – to a time when opinion was already polarised between eco-campaigners on the one side and commercial interests on the other.

There are habitual grumbles that it’s hard for advisers to gain sufficient traction. This isn’t surprising. For politicians, the focus is on the urgent and parochial – and getting re-elected. The issues that attract their attention are those that get headlined in the media, and fill their in-box.

So scientists might have more leverage on politicians indirectly – by campaigning, so that the public and the media amplify their voice, for example – rather than via more official and direct channels. They can engage by involvement with NGOs, via blogging and journalism, or through political activity. There’s scope for campaigners on all the issues I’ve mentioned, and indeed many others. For instance, the ‘genetic code’ pioneer John Sulston campaigns for affordable drugs for Africa.

And I think religious leaders have a role. I’m on the council of the Pontifical Academy of Sciences (which is itself an ecumenical body: its members represent all faiths or none). Max Perutz, for instance, was in a group of four who acted as emissaries of the Pope to promote arms control. And recently, my economist colleague Partha Dasgupta, along with Ram Ramanathan, a climate scientist – two lapsed Hindus! – achieved great leverage by laying the groundwork for the Papal encyclical on climate and environment.

There’s no gainsaying the Catholic Church’s global reach – nor its long-term perspective, nor its concern for the world’s poor. The Encyclical emphasised our responsibility to the developing world, and to future generations. In the lead-up to the Paris conference it had a substantial and timely influence on voters and leaders in Latin America, Africa and East Asia (even perhaps in the US Republican Party).

Science is a universal culture, spanning all nations and faiths. So scientists confront fewer impediments to straddling political divides. The Pugwash Conferences did this in the Cold War – and the governing board of Sesame, a physics project in Jordan, gets Israelis and Iranians around the same table today.

Of course, most of these challenges are global. Coping with potential shortages of food, water and resources – and the transition to low-carbon energy – can’t be tackled by each nation separately. Nor can threat reduction. For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness. Indeed, a key issue is whether nations need to give up more sovereignty to new organisations along the lines of the IAEA and WHO, and whether national academies, The World Academy of Sciences and similar bodies should get more involved.

Universities are among the most international of our institutions, and they have a special role. Academics are privileged to have influence over successive generations of students. Indeed, younger people, who expect to survive most of the century, are more anxious about long-term issues, and more prepared to support ‘effective altruism’ and other causes.

We should use universities’ convening power to gather experts together to address the world’s problems. That’s why some of us in Cambridge (with an international advisory group) have set up the Centre for the Study of Existential Risk, with a focus on the more extreme ‘low probability/high consequence’ threats that might confront us. They surely deserve expert analysis in order to assess which can be dismissed firmly as science fiction, and which should be on the ‘risk register’; to consider how to enhance resilience against the more credible ones; and to warn against technological developments that could run out of control. Even if we reduced these risks by only a tiny percentage, the stakes are so high that we’ll have earned our keep. A wise mantra is that ‘the unfamiliar is not the same as the improbable’.

I think scientists should all be prepared to divert some of their efforts towards public policy, and engage with individuals from government, business, and NGOs. There is in the US, incidentally, one distinctive format for such engagement that has no real parallel here. This is the JASON group. It was founded in the 1960s with support from the Pentagon. It involves top-rank academic scientists – in the early days they were mainly physicists, but the group now embraces other fields. They’re bankrolled by the Defense Department, but it’s a matter of principle that they choose their own new members. Some – Dick Garwin and Freeman Dyson, for instance – have been members since the 1960s. The JASONs spend about 6 weeks together in the summer, with other meetings during the year. It’s a serious commitment. The sociology and ‘chemistry’ of such a group hasn’t been fully replicated anywhere else. Perhaps we should try to do so in the UK, not for the military but in civilian areas – the remit of DEFRA, for instance, or the Department of Transport. The challenge is to assemble a group of really top-rank scientists who enjoy cross-disciplinary discourse and tossing ideas around. It won’t ‘take off’ unless they dedicate substantial time to it – and unless the group addresses the kind of problems that play to their strengths.

So to sum up, I think we can truly be techno-optimists. The innovations that will drive economic advance – information technology, biotech and nanotech – can boost the developing as well as the developed world, but there’s a depressing gap between what we could do and what actually happens. Will richer countries recognise that it’s in their own interest for the developing world fully to share the benefits of globalisation? Can nations sustain effective but non-repressive governance in the face of threats from small groups with high-tech expertise? And – above all – can our institutions prioritise projects that are long-term on political timescales, even if a mere instant in the history of our planet?

We’re all on this crowded world together. Our responsibility – to our children, to the poorest, and to our stewardship of life’s diversity – surely demands that we don’t leave a depleted and hazardous world. I give the last word to the eloquent biologist Peter Medawar:

“The bells that toll for mankind are [...] like the bells of Alpine cattle. They are attached to our own necks, and it must be our fault if they do not make a tuneful and melodious sound.”

 

For more information on Lord Rees' IPR lecture, please see our writeup here.

 

How could a global public database help to tackle corporate tax avoidance?

📥  Data, politics and policy, Economics

Dr Jonathan Gray is Prize Fellow at the IPR. This post is based on a newly published research report which he contributed to.

The multinational corporation has become one of the most powerful and influential forms of economic organisation in the modern world. Emerging at the bleeding edge of colonial expansion in the seventeenth century, entities such as the Dutch and British East India Companies required novel kinds of legal, political, economic and administrative work to hold their sprawling networks of people, objects, resources, activities and information together across borders. Today it is estimated that over two thirds of the world’s biggest economic entities are corporations rather than countries.


 

Our lives are permeated by and entangled with the activities and fruits of these multinationals. We are surrounded by their products, technologies, platforms, apps, logos, retailers, advertisements, publications, packaging, supply chains, infrastructures, furnishings and fashions. In many countries they have assumed the task of supplying societies with water, food, heat, clothing, transport, electricity, connectivity, information, entertainment and sociality. We carry their trackers and technologies in our pockets and on our screens. They provide us not only with luxuries and frivolities, but the means to get by and to flourish as human beings in the contemporary world. They guide us through our lives, both figuratively and literally. The rise of new technologies means that corporations may often have more data about us than states do – and more data than we have about ourselves.


Shipyard of the Dutch East India Company in Amsterdam, 1750. Wikipedia.

But what do we know about them? What are these multinational entities – and where are they? What do they bring together? What role do they play in our economies and societies? Are their tax contributions commensurate with their profits and activities? Where should we look to inform legal, economic and policy measures to shape their activities for the benefit of society, not just shareholders? At the moment these questions are surprisingly difficult to answer – at least in part due to a lack of publicly available information. We are currently on the brink of a number of important policy decisions which will have a lasting effect on what we are able to know and how we are able to respond to these mysterious multinational giants.

A wave of high-profile public controversies, mobilisations and interventions around the tax affairs of multinationals followed in the wake of the 2007-2008 financial crisis. Tax justice and anti-austerity activists have occupied high street stores in order to protest multinational tax avoidance. A group of local traders in Wales sought to move their town offshore in order to publicise and critique the legal and accountancy practices used by multinationals. One artist issued fake certificates of incorporation for Cayman Islands companies to highlight the social costs of tax avoidance. Corporate tax avoidance came to epitomise an economic globalisation that lacked corresponding democratic controls.


Image from report on IKEA’s tax planning strategies. Greens/EFA Group in European Parliament.

This public concern after the crisis prompted a succession of projects from various transnational groups and institutions. The then-G8 and G20 committed to reducing the “misalignment” between the activities and profits of multinationals. The G20 tasked the OECD with launching an initiative dedicated to tackling tax “Base Erosion and Profit Shifting” (BEPS). The OECD BEPS project surfaced different ways of understanding and accounting for multinational companies – including questions such as what they are, where they are, how to calculate where they should pay money, and by whom they should be governed.

For example, many industry associations, companies, institutions and audit firms advocated sticking to the “arm’s length principle”, which would treat multinationals as a group of effectively independent legal entities. On the other hand, civil society groups and researchers called for “unitary taxation”, which would treat multinationals as a single entity with operations in multiple countries. The consultation also raised questions about the governance of transnational tax policy, with some groups arguing that responsibility should shift from the OECD to the United Nations to ensure that all countries have a say – especially those in the Global South.
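The practical difference between the two principles can be made concrete with a toy calculation (all figures invented). Under unitary taxation with formulary apportionment, the group’s global profit is divided among countries according to where real activity – sales, payroll, assets – takes place. The equal weighting below is one common illustrative choice, not a fixed standard:

```python
def unitary_apportionment(group_profit, factors):
    """Apportion a multinational group's global profit across countries.

    `factors` maps country -> (sales, payroll, assets). Each country's
    taxable share is the equal-weighted average of its share of each
    factor. Real proposals differ on the factors and weights used.
    """
    totals = [sum(f[i] for f in factors.values()) for i in range(3)]
    shares = {}
    for country, f in factors.items():
        shares[country] = group_profit * sum(f[i] / totals[i] for i in range(3)) / 3
    return shares

# Invented numbers: all sales, staff and assets sit in country X,
# while under separate-entity accounting the profit is booked in Y.
factors = {"X": (100.0, 50.0, 30.0), "Y": (0.0, 0.0, 0.0)}
shares = unitary_apportionment(90.0, factors)
# Unitary treatment assigns the full 90.0 of profit to X, where the
# activity is, regardless of where the group chose to book it.
```

Under the arm’s length principle, by contrast, the intra-group prices charged between X and Y determine where that 90.0 ends up – which is precisely the lever that profit shifting exploits.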

While many civil society actors highlighted the shortcomings and limitations of the OECD BEPS process, they acknowledged that one of its main coups was to obtain global institutional recognition for a proposal which had been central to the “tax justice” agenda for the previous decade: “Country by Country Reporting” (CBCR), which would require multinationals to produce comprehensive, global reports on their economic activities and tax contributions, broken down by country. But there was one major drawback: it was suggested that this information should be shared between tax authorities, rather than being made public. Since the release of the OECD BEPS final reports in 2015, a loose-knit network of campaigners has been busy working to make this data public.

Today we are publishing a new research report looking at the current state and future prospects of a global database on the economic activities and tax contributions of multinationals – including who might use it and how, what it could and should contain, the extent to which one could already start building such a database using publicly available sources, and next steps for policy, advocacy and technical work. It also highlights what is involved in the making of data about multinationals, including the social and political processes of classification and standardisation on which this data depends.
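To give a feel for what such a database would hold, here is a minimal sketch of a per-country CBCR record, together with a simple “misalignment” check of the kind the G20 language implies – comparing each jurisdiction’s share of booked profit with its share of employees. The field names are loosely modelled on the OECD template, and the figures are invented for illustration, not drawn from the report:

```python
from dataclasses import dataclass

@dataclass
class CountryReport:
    """One country line of a country-by-country report (illustrative fields)."""
    jurisdiction: str
    revenue: float           # revenues booked in the jurisdiction
    profit_before_tax: float
    tax_paid: float
    employees: int

def misalignment(reports):
    """Share of group profit minus share of group employees, per jurisdiction.

    A large positive gap flags jurisdictions where profit is booked
    without matching real activity -- the pattern BEPS aims to surface.
    """
    total_profit = sum(r.profit_before_tax for r in reports)
    total_staff = sum(r.employees for r in reports)
    return {
        r.jurisdiction: r.profit_before_tax / total_profit - r.employees / total_staff
        for r in reports
    }

# Hypothetical group: most staff in country A, most profit booked in B.
group = [
    CountryReport("A", revenue=900.0, profit_before_tax=20.0, tax_paid=5.0, employees=950),
    CountryReport("B", revenue=100.0, profit_before_tax=80.0, tax_paid=1.0, employees=50),
]
gaps = misalignment(group)  # B's gap is large and positive
```

Even this toy version shows why campaigners want the data public: the comparison is trivial to compute once the per-country figures exist, but impossible without them.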


Exhibition of Paolo Cirio’s “Loophole for All” in Basel, 2015. Paolo Cirio.

The report reviews several public sources of CBCR data – including from legislation introduced in the wake of the financial crisis. Under the Trump administration, the US is currently in the process of repealing and dismantling key parts of the Dodd-Frank Wall Street Reform and Consumer Protection Act, including Section 1504 on transparency in the extractive industry, which Oxfam recently described as the “brutal loss of 10 years of work”. Some of the best available public CBCR data is generated as a result of the European Capital Requirements Directive IV (CRD IV), which gives us an unprecedented (albeit often imperfect) series of snapshots of multinational financial institutions with operations in Europe.

The longer-term dream for many is a global public database housed at the United Nations, but until this is realised civil society groups may build their own. As well as being used as an informational resource in itself, such a database could be seen as a form of “data activism” to change what public institutions count – taking a cue from citizen and civil society data projects that take the measure of issues people care about, from migrant deaths to police killings, literacy rates, water access or fracking pollution.

A civil society database could play another important role: it could be a means to facilitate the assembly and coordination of different actors who share an interest in the economic activities of multinationals. It would thus be not only a source of information, but also a mechanism for organisation – allowing journalists, researchers, civil society organisations and others to collaborate around the collection, verification, analysis and interpretation of this data. In parallel to ongoing campaigns for public data, a civil society database could thus be viewed as a kind of democratic experiment opening up space for public engagement, deliberation and imagination around how the global economy is organised, and how it might be organised differently.

In the face of an onslaught of nationalist challenges to the political and economic world-making projects of the previous century – not least through the “neoliberal protectionism” of the Trump administration – supporting the development of transnational democratic publics with an interest in understanding and responding to some of the world’s biggest economic actors is surely an urgent task.

This piece also appeared on openDemocracy.

The World in 2050 and Beyond: Part 2 - Technological Errors and Terrors

📥  Evidence and policymaking, Science and research policy, Security and defence

Lord Rees of Ludlow is Astronomer Royal at the University of Cambridge's Institute of Astronomy, and founder of the Centre for the Study of Existential Risk. This blog post, the second in a three-part series, is based on a lecture he gave at the IPR on 9 February. Read the first part here.

I think we should be evangelists for new technologies – without them the world can’t provide food, and sustainable energy, for an expanding and more demanding population. But we need wisely-directed technology. Indeed, many are anxious that it’s advancing so fast that we may not properly cope with it – and that we’ll have a bumpy ride through this century.


 

Let me expand on these concerns.

Our world increasingly depends on elaborate networks: electric-power grids, air traffic control, international finance, globally-dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns – real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel could spread a pandemic worldwide within a week, causing the gravest havoc in the shambolic megacities of the developing world. And social media can spread panic and rumour, and economic contagion, literally at the speed of light.

To guard against the downsides of such an interconnected world plainly requires international collaboration. For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.

Advances in microbiology – diagnostics, vaccines and antibiotics – offer prospects of containing pandemics. But the same research has controversial aspects. For instance, in 2012, groups in Wisconsin and in Holland showed that it was surprisingly easy to make the influenza virus both more virulent and transmissible – to some, this was a scary portent of things to come. In 2014 the US federal government decided to cease funding these so-called ‘gain of function’ experiments.

The new CRISPR-cas technique for gene-editing is hugely promising, but there are ethical concerns raised by Chinese experiments on human embryos and by possible unintended consequences of ‘gene drive’ programmes.

Back in the early days of recombinant DNA research, a group of biologists met in Asilomar, California, and agreed guidelines on what experiments should and shouldn’t be done. This seemingly encouraging precedent has triggered several meetings to discuss recent developments in the same spirit. But today, 40 years after Asilomar, the research community is far more broadly international, and more influenced by commercial pressures. I’d worry that whatever regulations are imposed, on prudential or ethical grounds, can’t be enforced worldwide – any more than the drug laws can, or the tax laws. Whatever can be done will be done by someone, somewhere.

And that’s a nightmare. Whereas an atomic bomb can’t be built without large scale special-purpose facilities, biotech involves small-scale dual-use equipment. Indeed, biohacking is burgeoning even as a hobby and competitive game.

We know all too well that technical expertise doesn’t guarantee balanced rationality. The global village will have its village idiots and they’ll have global range. The rising empowerment of tech-savvy groups (or even individuals), by bio as well as cyber technology will pose an intractable challenge to governments and aggravate the tension between freedom, privacy and security.

Concerns about bioerror and bioterror are relatively near-term – within 10 or 15 years. What about 2050 and beyond?

The smartphone, the web and their ancillaries are already crucial to our networked lives. But they would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open – or at least ajar – to transformative advances that may now seem science fiction.

On the bio front, the great physicist Freeman Dyson conjectures a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. If it becomes possible to ‘play God on a kitchen table’ (as it were), our ecology (and even our species) may not long survive unscathed.

And what about another transformative technology: robotics and artificial intelligence (AI)?

There have been exciting advances in what’s called generalised machine learning: DeepMind (a small London company, since bought by Google) has just achieved a remarkable feat – its computer has beaten the world champion at the game of Go. Meanwhile, Carnegie Mellon University has developed a machine that can bluff and calculate as well as the best human players of poker.

Of course it’s 20 years since IBM's 'Deep Blue' beat Kasparov, the world chess champion. But Deep Blue was programmed in detail by expert players. In contrast, the machines that play Go and poker gained expertise by absorbing huge numbers of games and playing against themselves. Their designers don’t themselves know how the machines make seemingly insightful decisions.

The speed of computers allows them to succeed by ‘brute force’ methods. They learn to identify dogs, cats and human faces by ‘crunching’ through millions of images – not the way babies learn. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!).

But advances are patchy. Robots are still clumsier than a child in moving pieces on a real chessboard. They can’t tie your shoelaces or cut old people’s toenails. But sensor technology, speech recognition, information searches and so forth are advancing apace.

They won’t just take over manual work (indeed plumbing and gardening will be among the hardest jobs to automate), but routine legal work (conveyancing and suchlike), medical diagnostics and even surgery.

Can robots cope with emergencies? For instance, if an obstruction suddenly appears on a crowded highway, can Google’s driverless car discriminate whether it’s a paper bag, a dog or a child? The likely answer is that its judgement will never be perfect, but will be better than the average driver – machine errors will occur, but not as often as human error. But when accidents do occur, they will create a legal minefield. Who should be held responsible – the ‘driver’, the owner, or the designer?

The big social and economic question is this: will this ‘second machine age’ be like earlier disruptive technologies – the car, for instance – and create as many jobs as it destroys? Or is it really different this time?

The money ‘earned’ by robots could generate huge wealth for an elite. But to preserve a healthy society will require massive redistribution to ensure that everyone has at least a ‘living wage’. A further challenge will be to create and upgrade public service jobs where the human element is crucial – carers for young and old, custodians, gardeners in public parks and so on – jobs which are now undervalued, but in huge demand.

But let’s look further ahead.

If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate. Such machines pervade popular culture – in movies like Her, Transcendence and Ex Machina.

Do we have obligations towards them? We worry if our fellow-humans, and even animals, can’t fulfil their natural potential. Should we feel guilty if our robots are under-employed or bored?

What if a machine developed a mind of its own? Would it stay docile, or ‘go rogue’? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes, or even treat humans as an encumbrance.

Some AI pundits take this seriously, and think the field already needs guidelines – just as biotech does. But others regard these concerns as premature, and worry less about artificial intelligence than about real stupidity.

Be that as it may, it’s likely that society will be transformed by autonomous robots, even though the jury’s out on whether they’ll be ‘idiot savants’ or display superhuman capabilities.

There’s disagreement about the route towards human-level intelligence. Some think we should emulate nature, and reverse-engineer the human brain. Others say that’s as misguided as designing a flying machine by copying how birds flap their wings. And philosophers debate whether ‘consciousness’ is special to the wet, organic brains of humans, apes and dogs – so that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life.

Ray Kurzweil, now working at Google, argues that once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones – an intelligence explosion. He thinks that humans could transcend biology by merging with computers. In old-style spiritualist parlance, they would 'go over to the other side'.

Kurzweil is a prominent proponent of this so-called ‘singularity’. But he’s worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of 'cryonic' enthusiasts – based in California – called the 'society for the abolition of involuntary death'. They will freeze your body, so that when immortality’s on offer you can be resurrected or your brain downloaded.

I told them I'd rather end my days in an English churchyard than a Californian refrigerator. They derided me as a 'deathist' – really old fashioned.

I was surprised to find that three academics in this country had gone in for cryonics. Two had paid the full whack; the third had taken the cut-price option of having just his head frozen. I was glad they were from Oxford, not from Cambridge – or Bath.

But of course, research on ageing is being seriously prioritised. Will the benefits be incremental? Or is ageing a ‘disease’ that can be cured? Dramatic life-extension would plainly be a real wild card in population projections, with huge social ramifications. But it may happen, along with human enhancement in other forms.

And now a digression into my special interest – space. This is where robots surely have a future.

During this century the whole solar system will be explored by flotillas of miniaturised probes – far more advanced than ESA’s Rosetta, or the NASA probe that transmitted amazing pictures from Pluto, which is 10,000 times further away than the moon. These two instruments were designed and built 15 years ago. Think how much better we could do today. And later this century giant robotic fabricators may build vast lightweight structures floating in space (gossamer-thin radio reflectors or solar energy collectors, for instance) using raw materials mined from the Moon or asteroids.

Robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots into deep space, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. SpaceX, led by Elon Musk, who also makes Tesla electric cars, has launched unmanned payloads and docked with the Space Station – and has recently achieved a soft recovery of the rocket’s first stage, rendering it reusable. Musk hopes soon to offer orbital flights to paying customers.

Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon – voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they’ve sold a ticket for the second flight – but not for the first.

We should surely acclaim these private enterprise efforts in space; they can tolerate higher risks than a western government could impose on publicly-funded bodies, and thereby cut costs compared to NASA or ESA. But they should be promoted as adventures or extreme sports – the phrase ‘space tourism’ should be avoided. It lulls people into unrealistic confidence.

By 2100 courageous pioneers in the mould of (say) the British adventurer Sir Ranulph Fiennes – or Felix Baumgartner, who broke the sound barrier in freefall from a high-altitude balloon – may have established ‘bases’ independent from the Earth, on Mars, or maybe on asteroids. Musk himself (aged 45) says he wants to die on Mars – but not on impact.

But don’t ever expect mass emigration from Earth. Nowhere in our solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth's problems. There’s no ‘Planet B’.

Indeed, space is an inherently hostile environment for humans. For that reason, even though we may wish to regulate genetic and cyborg technology on Earth, we should surely wish the space pioneers good luck in using all such techniques to adapt to alien conditions. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

As an astronomer I’m sometimes asked: ‘does contemplation of huge expanses of space and time affect your everyday life?’ Well, having spent much of my life among astronomers, I have to tell you that they’re not especially serene, and fret as much as anyone about what happens next week or tomorrow. But they do bring one special perspective – an awareness of the far future. Let me explain.

The stupendous timespans of the evolutionary past are now part of common culture (outside ‘fundamentalist’ circles, at any rate). But most people still tend to regard humans as the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it's got 6 billion more before the fuel runs out, and the expanding universe will continue – perhaps forever. To quote Woody Allen, eternity is very long, especially towards the end. So we may not even be at the half-way stage of evolution.

It may take just decades to develop human-level AI – or it may take centuries. Be that as it may, it’s but an instant compared to the cosmic future stretching ahead.

There must be chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But fewer limits constrain electronic computers (still less, perhaps, quantum computers); for these, the potential for further development could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the future cogitations of AI.

Moreover, the Earth’s environment may suit us ‘organics’ – but interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop greater powers than humans can even imagine.

I’ve no time to speculate further beyond the flaky fringe – perhaps a good thing! So let me conclude by focusing back more closely on the here and now.

For more information on Lord Rees' IPR lecture, please see our writeup here.

 

The World in 2050 and Beyond: Part 1 - The Ever-Heavier Footprint

📥  Energy and environmental policy, Food and agriculture, Science and research policy

Lord Rees of Ludlow is Astronomer Royal at the University of Cambridge's Institute of Astronomy, and founder of the Centre for the Study of Existential Risk. This blog post, the first in a three-part series, is based on a lecture he gave at the IPR on 9 February.

A few years ago, I met a well-known Indian tycoon. Knowing that I had the title of Astronomer Royal, he asked: ‘do you do the queen’s horoscopes?’ I responded, with a straight face: ‘If she wanted one, I’m the person she’d ask’. He then seemed eager to hear my predictions. I told him that stocks would fluctuate, there’d be new tensions in the Middle East, and so forth. He paid rapt attention to these ‘insights’. But I then came clean. I said I was just an astronomer – not an astrologer. He then lost all interest in my predictions. And rightly so; scientists are rotten forecasters – almost as bad as economists.


 

Nor do politicians and lawyers have a sure touch. One rather surprising futurologist was Lord Birkenhead, crony of Churchill and Lord Chancellor in the 1920s. He wrote a book entitled ‘The World in 2030’. He’d read Wells and Bernal – he envisaged babies incubated in flasks, flying cars and suchlike fantasies. In contrast, he foresaw social stagnation.

Here’s a quotation: “In 2030 women will still, by their wit and charms, inspire the most able men towards heights that they could never themselves achieve.”

I’m going to make forecasts, but – mindful of these precedents – very tentatively.

Astronomers think in billions of years. But even in that perspective this century is special. The Earth has existed for 45 million centuries, and humans for a few thousand of them – but this is the first century in which one species, ours, has the planet’s future in its hands. We’re deep in an era that’s called the Anthropocene. We could irreversibly degrade the biosphere, we could trigger the transition from biological to electronic intelligences, or misdirected technology – bio or cyber – could cause a catastrophic setback to civilisation.

Twelve years ago I wrote a book on this theme which I entitled Our Final Century? My publisher deleted the question-mark. The American publishers changed the title to 'Our Final Hour'. (Americans seek instant gratification – and the converse).

I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating setbacks – and we’ve had one lucky escape already.

At any time in the Cold War era – when armament levels escalated beyond all reason – the superpowers could have stumbled towards armageddon through muddle and miscalculation.

Nuclear weapons are based on 20th century science. I’ll focus later in my argument on 21st century sciences – bio, cyber, and AI – which offer huge potential benefits, but also expose us to novel vulnerabilities.

But before that let’s focus on the long-term threats that stem not from conscious decisions, but from humanity’s ever-heavier collective ‘footprint’. Even with a cloudy crystal ball there are some things we can predict. For instance, it’s almost inevitable that by mid-century, the world will be more crowded.

Fifty years ago, world population was about 3 billion. It now exceeds 7 billion. But the growth is slowing. Indeed, the number of births per year, worldwide, peaked a few years ago and is going down. Nonetheless world population is forecast to rise to around 9 billion by 2050. That’s partly because most people in the developing world are young. They are yet to have children, and they will live longer. The age histogram in the developing world will become more like it is in Europe.

Experts predict continuing urbanisation – 70 percent of people in cities by 2050. Even by 2030 Lagos, São Paulo and Delhi will have populations above 30 million. To prevent megacities becoming turbulent dystopias will surely be a major challenge to governance.

Population growth seems currently under-discussed. That is maybe because doom-laden forecasts in the 1970s, by the Club of Rome, Paul Ehrlich and others, have proved off the mark. Up until now, food production has more than kept pace – famines stem from wars or maldistribution, not overall shortage. And it’s deemed by some a taboo subject – tainted by association with eugenics in the 1920s and 30s, with Indian policies under Indira Gandhi, and more recently with China's hard-line one-child policy.

Can 9 billion people be fed? My layman’s impression from reading the work of experts is that the answer’s yes. Improved agriculture – low-till, water-conserving, and perhaps involving GM crops – together with better engineering to reduce waste, improve irrigation, and so forth, could sustainably feed that number by mid-century. The buzz-phrase is ‘sustainable intensification’.

But there will need to be lifestyle changes. The world couldn't sustain even its present population if everyone lived like Americans do today – using as much energy per person and eating as much beef.

Population trends beyond 2050 are harder to predict. They will depend on what people now in their teens and 20s decide about the number and spacing of their children. Enhanced education and the empowerment of women – surely a benign priority in itself – could reduce fertility rates where they’re now highest. And the demographic transition hasn’t reached parts of India and Sub-Saharan Africa.

If families in Africa remain large, then according to the UN that continent’s population could double again by 2100 to 4 billion, thereby raising the global population to 11 billion. Nigeria alone would by then have as big a population as Europe and North America combined, and almost half of all the world’s children would be in Africa.

Optimists remind us that each extra mouth brings also two hands and a brain. Nonetheless, the higher the population becomes, the greater will be all pressures on resources – especially if the developing world narrows its gap with the developed world in its per capita consumption – and the harder it will be for Africa to escape the ‘poverty trap’. So we must surely hope that the global figure declines rather than rises after 2050.

Moreover, if humanity’s collective impact on nature pushes too hard against what Johan Rockstrom calls ‘planetary boundaries’, the resultant ‘ecological shock’ could irreversibly impoverish our biosphere. Extinction rates are rising; we’re destroying the book of life before we’ve read it. Biodiversity is a crucial component of human wellbeing. We're clearly harmed if fish stocks dwindle to extinction; there are plants in the rainforest whose gene pool might be useful to us. But for many environmentalists, preserving the richness of our biosphere has value in its own right, over and above what it means to us humans. To quote the great ecologist E O Wilson, ‘mass extinction is the sin that future generations will least forgive us for’.

The world’s getting more crowded. And there’s a second firm prediction: it will gradually get warmer. In contrast to population issues, climate change is certainly not under-discussed.

The famous Keeling curve shows how the concentration of CO2 in the air is rising, mainly due to the burning of fossil fuels. It’s still unclear how much the climatic effects of rising CO2 are amplified by associated changes in water vapour and clouds. The fifth IPCC report presents a spread of projections.

But despite the uncertainties there are two messages that most would agree on:

  1. Regional disruptions to weather patterns within the next 20-30 years will aggravate pressures on food and water, and engender migration.
  2. Under ‘business as usual’ scenarios we can’t rule out, later in the century, really catastrophic warming, and tipping points triggering long-term trends like the melting of Greenland’s icecap.

But even those who accept both these statements have diverse views on the policy response. It’s important to realise that these divergences stem less from differences about the science than from differences in economics and ethics – in particular, in how much obligation we should feel towards future generations.

Economists who apply a standard discount rate (as, for instance, Bjørn Lomborg’s Copenhagen Consensus does) are in effect writing off what happens beyond 2050 – so unsurprisingly they downplay the priority of addressing climate change in comparison with shorter-term efforts to help the world’s poor.

But if you care about those who’ll live into the 22nd century and beyond, then, as economists like Stern and Weitzman argue, you deem it worth paying an insurance premium now, to protect those generations against the worst-case scenarios.
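The force of discounting described above is easy to quantify. A minimal sketch (with illustrative rates of my own choosing, not figures from the lecture) shows how a cost falling due 85 years hence all but vanishes at a conventional discount rate, yet retains much of its weight at a near-zero, Stern-style rate:

```python
def present_value_factor(rate: float, years: int) -> float:
    """Value today of £1 of cost incurred `years` from now, discounted at `rate`."""
    return 1.0 / (1.0 + rate) ** years

# A cost falling due in 2100, viewed from 2015 (85 years out):
conventional = present_value_factor(0.05, 85)   # roughly 0.016: effectively written off
stern_style = present_value_factor(0.014, 85)   # roughly 0.31: still carries real weight

print(f"5%   rate: £1 of damage in 2100 is worth £{conventional:.3f} today")
print(f"1.4% rate: £1 of damage in 2100 is worth £{stern_style:.3f} today")
```

On these assumed numbers the same future damage weighs roughly twenty times more heavily under the low rate – which is why policy conclusions diverge so sharply even among those who agree on the science.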

So, even those who agree that there’s a significant risk of climate catastrophe a century hence will differ in how urgently they advocate action today. Their assessment will depend on expectations of future growth, and optimism about technological fixes. But, above all, it will depend on an ethical issue – in optimising people’s life-chances, should we discriminate on grounds of date of birth?

(As a parenthesis, I’d note that there’s one policy context where a discount rate of essentially zero is applied – radioactive waste disposal, where repositories are required to prevent leakage for 10,000 years. This is somewhat ironic, when we can’t plan the rest of energy policy even 30 years ahead.)

Consider this analogy. Suppose astronomers had tracked an asteroid, and calculated that it would hit the Earth in 2080, 65 years from now – not with certainty, but with (say) 10 per cent probability. Would we relax, saying that it’s a problem that can be set on one side for 50 years – people will then be richer, and it may turn out then that it’s going to miss the Earth anyway? I don’t think we would. There would surely be a consensus that we should start straight away and do our damnedest to find ways to deflect it, or mitigate its effects.

What will actually happen on the climate-policy front? The pledges made at the Paris conference are a positive step.

But even if they’re honoured, CO2 concentrations will rise steadily throughout the next 20 years. By then, we'll know with far more confidence – from a longer timebase of data, and from better modelling – just how strong the feedback from water vapour and clouds actually is. If the so-called ‘climate sensitivity’ is low, we’ll relax. But if it’s large, and climate consequently seems on an irreversible trajectory into dangerous territory, there may then be a pressure for 'panic measures'. This could involve a 'plan B' – being fatalistic about continuing dependence on fossil fuels, but combatting its effects by either a massive investment in carbon capture and storage, or else by geoengineering.

It’s feasible to inject enough aerosols into the stratosphere to cool the world’s climate – indeed, what is scary is that this might be within the resources of a single nation, or even a single corporation. There could be unintended side-effects; moreover, the warming would return with a vengeance if the countermeasures were ever discontinued – and other consequences of rising CO2 (especially the deleterious effects of ocean acidification) would be unchecked.

Geoengineering would be a political nightmare: not all nations would want to adjust the thermostat the same way. Very elaborate climatic modelling would be needed in order to calculate the regional impacts of an artificial intervention. (The only beneficiaries would be lawyers. They’d have a bonanza if nations could litigate over bad weather!).

I think it’s prudent to explore geoengineering techniques enough to clarify which options make sense, and perhaps damp down undue optimism about a technical 'quick fix' for our climate.

Many still hope that our civilisation can segue smoothly towards a low-carbon future. But politicians won't gain much resonance by advocating a bare-bones approach that entails unwelcome lifestyle changes – especially if the benefits are far away and decades into the future. But three measures that could mitigate climate change seem politically realistic.

First, all countries could improve energy-efficiency, insulate buildings better, and so forth – and thereby actually save money.

Second, we could target cuts to methane, black carbon and CFC emissions. These are subsidiary contributors to long-term warming. But unlike CO2, they cause local pollution too – in Chinese cities, for instance – so there’s a stronger incentive to reduce them.

But third, nations should expand R&D into all forms of low-carbon energy generation (renewables, 4th generation nuclear, fusion, and the rest), and into other technologies where parallel progress is crucial – especially storage (batteries, compressed air, pumped storage, flywheels, etc) and smart grids. That’s why an encouraging outcome of Paris was an initiative called ‘Mission Innovation’. It was launched by President Obama and the Indian Prime Minister Modi, and endorsed by the G7 nations, plus India, China and 11 other nations. It’s hoped they’ll pledge to double their publicly funded R&D into clean energy by 2020 and to coordinate efforts. There’s been a parallel pledge by Bill Gates and other private philanthropists.

This target is a modest one. Presently, only 2 per cent of publicly funded R&D is devoted to these challenges. Why shouldn’t the percentage be comparable to spending on medical or defence research?

The faster these ‘clean’ technologies advance, the sooner will their prices fall so they become affordable to developing countries – where more generating capacity will be needed, where the health of the poorest billions is jeopardised by smoky stoves burning wood or dung, and where there would otherwise be pressure to build coal-fired power stations.

It would be hard to think of a more inspiring challenge for young engineers than devising clean energy systems for the world.

All renewables have their niches – wind, tides, waves and hydro here in the UK, for instance. But an attractive scenario for Europe might be large-scale solar energy, coupled with a transcontinental DC smart grid network (north-south to transmit power from Spain or even Morocco to the less sunny north, and east-west to smooth over peak demand in different time-zones) with efficient storage as well.

Of course the unique difficulty of motivating CO2 reductions is that the impact of any action not only lies decades ahead, but is globally diffused. In contrast, for most politicians the immediate trumps the long term; the local trumps the global. So climate issues, which gained prominence during the Paris conference, will slip down the agenda again unless there’s continuing public concern.

For more information on Lord Rees' IPR lecture, please see our writeup here.

 

 

Being Female, NEET and Economically Inactive – what does that mean?

📥  Business and the labour market, Welfare and social security

Professor Sue Maguire is Honorary Professor at the IPR

 

‘Any evidence that we have on the NEET group is dated. We have pockets to support different types of policy development, but no way do we have good evidence…’

- (Policymaker)[1]

 

This admission by a policymaker highlights the dearth of evidence on which to base policy for young people who are not in education, employment or training (NEET) – a group currently numbering 857,000[2] according to official UK figures – and an area crying out for substantial research and investigation.


 

A contested term

We have become very familiar with the term ‘NEET’ and its widespread application to quantify levels of social and economic exclusion among young people. Leaving aside the suspicion that NEET became the preferred term partly because of our love of acronyms and partly because it was less emotionally charged than ‘Status Zero’ (the original UK classification), it is important to remember that it was originally applied to 16- and 17-year-olds who could no longer be classified as ‘unemployed’ due to legislative changes. Since then, NEET has become the term used not just in the UK, but internationally, to refer to a much wider age cohort of young people (16-24 in the UK).

But how useful is the NEET label in identifying the true volume of people in this category? Significant numbers of those aged 16-18 are not identified as NEET because their destinations are not recorded, and large numbers of those over 18 who are defined as NEET fail to register for welfare or other types of support in the UK. Thus, the numbers in the claimant count are much smaller than those in the overall NEET population, leading to the conclusion that a large number of young people are unsupported by statutory services – why is this? Given the disconnect between the official NEET statistics and policy intervention to track, manage and support the estimated NEET population, perhaps it is time to re-think our application of the term NEET and, crucially, our policy responses to support young people who fall into this overall definition.

Gender differences

Irrespective of the expanded scope of the NEET group, it is apparent that gender differences within the cohort have been neglected in the literature, wider debates and, crucially, policy formation about the NEET group. Data from the January-March quarter of the 2016 Labour Force Survey and the National Online Manpower Information System highlighted differences between NEET females and NEET males. Despite its original purpose, the NEET category now includes young people who are actively seeking work, i.e. the economically active (EA) NEET group, as well as those who are economically inactive (EI), primarily because they have caring and/or domestic responsibilities or are unable to participate in education, employment or training due to long-term ill health. NEET young women outnumbered NEET young men (432,000 to 376,000); 66% of the young women were EI, compared to 43% of the young men; and young people who were NEET and EA were mostly young men (59%).
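The figures quoted above are internally consistent, as a quick arithmetic check confirms (assuming, as the text implies, that the EI percentages are shares of each gender's own NEET total):

```python
# Quoted figures: NEET totals and economically inactive (EI) shares by gender
neet_women, neet_men = 432_000, 376_000
ei_share_women, ei_share_men = 0.66, 0.43

# Economically active (EA) NEETs are the remainder within each group
ea_women = neet_women * (1 - ei_share_women)   # about 147,000
ea_men = neet_men * (1 - ei_share_men)         # about 214,000

# Young men's share of the EA NEET group
men_ea_share = ea_men / (ea_men + ea_women)
print(f"Young men as a share of EA NEETs: {men_ea_share:.0%}")
```

The result comes out at roughly 59 per cent, matching the statement that young people who were NEET and EA were mostly young men.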

Differences are also apparent in the types of benefits received by males and females in the NEET and EI group, with young women claiming Income Support (IS) in larger numbers, as a result of caring responsibilities. Most young men in the NEET EI group claim Employment and Support Allowance (ESA) due to illness or disability, with the primary cause being psychological problems. Significant numbers of young women claim ESA for the same reason.

To date, relatively little attention has been paid to young women who are NEET and EI, with a passive acceptance that their ‘caring’ responsibilities sideline them from meaningful support or policy intervention. Reasons for the differences described above need to be explored in greater depth in order to provide evidence on which effective policy initiatives can be introduced. A study currently being undertaken by myself and Young Women’s Trust, with funding from the Barrow Cadbury Trust, is seeking to address some of the gaps and shortcomings in our understanding of what it means to be NEET and EI, and what impact this categorisation has on the lives of young women.

The stakeholders

During the first year of this two-year project, interviews were carried out with ten key experts, including policymakers and academics. In addition, case studies were undertaken in five localities, with local stakeholders who were involved in devising and delivering employment interventions in each area being interviewed. These stakeholders typically included local authorities, Jobcentres, Local Enterprise Partnerships, education and training providers, and voluntary and community sector organisations.

Among these respondents, there was concern about the general acceptance that all young women who are NEET and EI will remain so for long periods because of early motherhood, caring responsibilities or ill health; rather, it was felt that this issue required 'unpacking' in order to gain a better understanding of their needs and requirements. An important finding was the relationship between the type of welfare benefit and intervention that young people receive and their classification as either NEET EI or NEET EA. Young women, who are much more likely than young men to be NEET and EI, typically remain on welfare support for much longer than those who are EA, and are also far less likely to receive any form of positive support or intervention. Conversely, the support offered to young people who are actively seeking work and claiming Jobseeker's Allowance (JSA) was fiercely criticised for its high levels of sanctioning, unrealistic target-setting and emphasis on removing claimants from the register at the earliest opportunity. It is only the unemployment rate that attracts national media attention and scrutiny from national government and authorities such as the International Labour Office. This difference between the two groups is also reflected in their respective claimant counts, with far fewer young people (especially young women) appearing in the NEET and EA category.

A preferable approach would provide claimants with targeted and tailored support, rather than subjecting them to demanding targets under the constant threat of sanctions. Concerns were also expressed about the impact of being NEET and EI on young women who were relatively isolated within their households and communities, notably their propensity to suffer from low self-confidence, low self-esteem and, for some, mental health issues. Their detachment from external and independent support and advice could have long-lasting effects on their health and their likelihood of future employment. It was reported to be very difficult for local agencies to identify and engage with young women in the NEET and EI group.

For young mothers who are NEET and EI, the major barriers to engaging in education, employment or training were deemed to be: a lack of affordable childcare; a reluctance to leave their children; poor access to transport; and a shortage of appropriate employment and training opportunities.

The issue of the large numbers of young people who do not appear in the system and are effectively 'unknown', as mentioned above, was prominent among respondents' concerns. This was attributed, in part, to cuts to local services which have constrained local authorities' ability to fulfil the requirement for mapping and tracking young people in Years 12-14. Official statistics show that, in many localities, the 'unknown' rates are higher than the NEET rates. This has been exacerbated by the decision to end any tracking responsibility at the individual's 18th birthday, when the post-18 group is, perhaps, more in need of monitoring and when the NEET rate rises significantly. Certainly, the absence of any agency or organisation with statutory responsibility for measuring the number of young people over the age of 18/19 who fail to apply for welfare support, or for addressing their needs, was perceived to be of immediate concern.

It was suggested that the reasons for young people’s detachment, leading to their destinations and circumstances being ‘unknown’, included: an unwillingness to cooperate with benefit regulations; fear of statutory bodies; family support which allows them to avoid registration for benefits; the stigma of benefit receipt; and informal or casual working arrangements. Whatever the reasons, this ‘hidden’ NEET population remains largely unquantifiable in many localities and out of the remit of statutory services. Hence, little is known about young people who fall into this category in terms of their characteristics, what has caused their detachment, and any barriers they may face.

The young women, in their own words

The in-depth interviews with ten young women who were NEET and EI provided illuminating insights into their lives and experiences, particularly in relation to their school and post-school experiences, domestic circumstances, money management, and their hopes and aspirations. In this admittedly small sample, those in receipt of IS had caring responsibilities (for their children), while those on ESA suffered from anxiety and depression. One respondent refused to claim welfare support at all because of her previous negative experiences of dealing with the Jobcentre.

Half of the young women were living in the parental home, but most of them, irrespective of their circumstances, continued to rely on a parent and/or family members for emotional, practical and financial advice and support. This included practical help with childcare and with food, clothing and personal care costs, as well as assistance with application forms for housing or benefits. Those who lived at their parents' home contributed minimal amounts to the household budget and, in some cases, their dependence on their family resulted in a reluctance to move out, because of the perceived risks this posed to their established support networks. A lack of friendship networks, few hobbies and interests, and limited social activities were the norm. Thus, family networks appeared both to insulate and to isolate young women from the outside world.

Strikingly, in the face of scarce resources, these young women were adept at managing their finances. This management took different forms, from prioritising expenditure on food, rent, fuel, children’s clothing and toiletries while eking out fortnightly benefit payments, to using loans to buy furniture and other goods from charity shops.

Conclusions

Questions must be raised about our ability to implement effective and appropriate (meaningful) policy interventions when there is clearly a dearth of knowledge and understanding about the NEET group, both in terms of its expanded age cohort and its inclusion of the EI and EA groups, which have been shown to have very different needs. Moreover, there appears to be a growing army of young people who, under the age of 18, have 'unknown' destinations, or who, over the age of 18, are classified as NEET within the statistics but fail to engage with the welfare system. This leads to the conclusion that the extension of the umbrella term 'NEET' to cover a much wider age cohort has not been accompanied by an expansion in understanding of the characteristics and needs of young people who fall into this category; perhaps just as importantly, the wider implications for inclusion and for policy responses have not been acknowledged.

Assumptions about young women who are NEET, have caring responsibilities and are likely to remain EI need to be challenged. Does the welfare system, with its categorisation of individuals based on criteria for benefit entitlement, label them for the convenience of the system rather than seeking to design initiatives which engage with them and ease their access to education, employment and training? Also, while many young women may wish to spend time caring for their children or relatives, and may not wish to feel pressured to (re)join the labour market, this choice needs to be accompanied by access to appropriate support and intervention when it is required. As it stands, young parents are 'left alone' within the benefit system until their youngest child reaches the age of five, at which point they are immediately expected to find work or training if they wish to claim benefits. They need sustained transitional support.

It was evident from the case studies that, at a local level, agencies providing support for the NEET group had established strong and effective partnership working to identify young people's needs and to develop local initiatives. More problematic were the short-term nature of funding for these initiatives and the consequent absence of long-term strategy or planning. Factors perceived to pose a threat to future support for excluded and marginalised young people were: a lack of programmes funded by central government; initiatives reliant on short-term funding with a variety of outcome measures; the impending removal of EU structural funds; and a growing reliance on charitable and philanthropic funding to support NEET intervention projects.

The term NEET and the inclusion of the terms EI and EA within it are in urgent need of reappraisal. Perhaps it is time to go back to the drawing board and to question whether ‘NEET’ continues to qualify and quantify the scale of social and economic exclusion among young people in Britain and, if it does, then what policy interventions can be delivered to address the whole population rather than selected sub-groups within it. Finally, questions must be asked about the appropriateness of using access to welfare support facilitated through registration with DWP as an adequate and effective mechanism to engage with young people who are NEET. The existing evidence would suggest that it is failing to meet the needs of many young people, particularly young women.

 

You can read the Summary Report and Full First Year Report on the Young Women's Trust website here.
