IPR Blog

Expert analysis, debates and comments on topical policy-relevant issues

Do Warnings Work?

📥  health, Risk

Professor Bill Durodié is Professor and Chair of International Relations in the University of Bath's Department of Politics, Languages and International Studies. The narrative presented here was supported by an award from the Gerda Henkel Stiftung under their Special Programme 'Security, Society and the State'.

It is commonly asserted that the first duty of government is to protect its citizens. But one of the challenges confronting authorities that produce advice and issue alerts is the extent to which precautionary messages have become an integral part of our cultural landscape in recent times. From public health to counter-terrorism, climate change to child safety, a profusion of agencies – both official and unofficial – are constantly seeking to raise our awareness and modify our behaviour whether we know it or not. This may be done with the best of intentions – but we should be mindful of where that may lead.


Issuing a warning presumes negative outcomes if it is not heeded. Accordingly, it transfers a degree of responsibility to recipients who may not have sought such counsel – or been consulted. Indeed, these may come to interpret it as a mechanism to deflect blame and accountability. And, aside from the intended response – presumed appropriate by those imparting the information – others may dispute the evidence presented, its interpretation, and the intentions behind these, as evidenced by acts of complacency and defiance.

Such negative consequences – deemed maladaptive by politicians and officials who have swallowed the psychologised lexicon of our times – reveal an important truth in our supposedly post-truth societies, and that is that people are not driven by evidence alone. Addressing their core values and beliefs is more critical to motivating change and achieving influence. This requires respecting their moral independence and recognising the importance of ideas. Process and data-driven, protectionist paternalism, on the other hand, reflects a low view of human beings, which is readily self-defeating.

Altering our choice architecture, as some describe it, encourages self-fulfilling prophecies that interfere with our autonomy and undermine consent in the name of improving welfare or keeping us safe. And while there is a wealth of literature regarding such interventions and their purported effectiveness, most relates to single cases or relies largely on precedent – such as preparing for terror attacks or controlling tobacco use – rather than examining the implicit assumptions and the wider, societal consequences of such approaches.

Responses like overreaction, habituation and fatigue derive not so much from specific instances of warning as from the cumulative impact of a cultural proclivity to issue such guidance. This, in turn, speaks to the growing disconnect between those providing advice – even if at arm’s length from the state (thereby inducing a limited sense of civic engagement) – and those charged with living by it. To a self-consciously isolated political class, proffering instructions and regulating behaviour appears to offer direction and legitimacy in an age when it is bereft of any broader social vision.

Yet, reflecting on the UK Foreign and Commonwealth Office provision of travel advisories before and after the 2002 Bali bombings, the distinguished Professor of War Studies Lawrence Freedman noted how such guidance ‘is bound to be incomplete and uncertain’. ‘[I]t is unclear’, he continued, ‘what can be achieved through general exhortations’. Far more important to averting accusations of complacency or alarmism on the part of government – ‘the sins of omission and commission’, as he put it – is the need to impart and share in a sense of strategic framing with the public. We might call this politics.

In his 2002 speech at the Lord Mayor’s Banquet, the then British Prime Minister, Tony Blair, advised how intelligence on possible security threats crossed his desk ‘all the time’. Only some was reliable. The remainder included misinformation and gossip. He sought to distinguish between specific intelligence, suggestive intelligence and acting ‘on the basis of a general warning’, which would effectively ‘be doing [the terrorists’] job for them’.

Blair explained how there was a balance to be struck and a judgement to be made ‘day by day, week by week’ in order not to shut down society. He noted that keeping citizens alert, vigilant and cooperative would test ‘not just our ability to fight, but … our belief in our own way of life’. In doing so, he implicitly pointed to the need for wider critical engagement and our having a sense of collective purpose beyond the immediacy of any threat.

But nudging people to act without their conscious support and endlessly raising awareness about all manner of presumed risks and adverse behaviours preclude both of these essential elements. Indeed, when some suggest that the general population are inherently ignorant, unqualified or too immature, or that they cannot be relied on to handle complex evidence to determine matters for their own good (an argument as old as Plato), they display a considerable complacency of their own, as well as an unwillingness to engage and an inability to inspire a broader constituency to effect change.

People can only become more knowledgeable, mature and reliable when they participate actively in matters of consequence. There can be no shared sense of social purpose if citizens are not treated as adults. Otherwise, official pronouncements come across as the disengaged exhortations of remote authorities, and warnings – as with the increasingly graphic images on cigarette packets – simply become the background noise of the self-righteous.

The refusal to be inoculated against H1N1 pandemic influenza once a vaccine was developed for it in 2009, for example, did not stem from the social media propagation of ‘rumours’ and ‘speculation’ acting on ‘volatile’ public opinion, as some supposed. Rather, and more damagingly still, it was a conscious rejection led by healthcare workers themselves, informed by their own experience of the virus and inured to the declarations of senior officials who announced that ‘it really is all of humanity that is under threat’, as well as to those who responded uncritically in accordance with them, developing models where none applied.

The language of warnings has shifted over the years from articulating threats, which could promote individual responsibility, to simply eliciting desired behaviours. Indeed, the proliferation of biological metaphors – ideas go viral, individuals are vulnerable, activities are addictive – reflects the demise of any wider moral or political outlook. But encouraging a responsive sensitivity and tacit acceptance by evoking negative emotions can readily backfire. It is unlikely to generate a critical culture or social solidarity.

So – do warnings work? It depends. Facts alone do not motivate many. It is how they are interpreted that matters. And the framing of these today often dismisses our agency and promotes a powerful sense of determinism. The Nobel Prize-winning economist Daniel Kahneman noted how ‘[t]here are domains in which expertise is not possible’. Decision-making – like democracy – is a moral choice in which we are all equals.

Not everything of value has a value and few things that are worthy have a worth. That is why the sole pursuit of evidence and data by those in authority, with a view to inducing acceptance and behaviour change, fails to inspire those who seek more to life than the mere protection of the state. Where are the ideas and ideals capable of leading us beyond a narrow, existential concern for our own well-being and towards a broader appreciation of the potential of the collective human project?

This piece also appeared on The Policy Space.

 

The Hard Brexit road to Indyref2


📥  Brexit, EU membership

Of all the political parties in the United Kingdom, the Scottish National Party is the most consistently strategic. That it lost a referendum on Scottish independence in 2014 and, barely three years later, is in a position to call another one is testament to its strategic acumen. It turns heated internal arguments into clear external purpose, executed with discipline. Yesterday, the Prime Minister accused it of treating politics as a game. She could hardly have chosen a less appropriate attack.


Calling a second referendum is high risk. If it is lost, as Quebecois nationalists know, the chances of striking it lucky a third time are remote. The economic arguments against independence remain formidable, and would be further complicated, not resolved, by a parting of the ways between Scotland and the rest of the United Kingdom over membership of the European Union.

Two factors explain Nicola Sturgeon’s decision: the intransigence of Conservative-Unionism and the weakness of the Labour Party. Intransigence is in part an artefact of the Prime Minister’s governing style, which combines “personal animus and political diligence”, as David Runciman has written. She sticks to a position doggedly and keeps things close to her in No10. She is capable of ruthless revenge, to the point of petulance, as Michael Heseltine recently discovered. It is a statecraft that has served her well until now. It is not one that is suited to sharing power in a process of negotiation and compromise across a fractured union.

Her choice of the hard route to Brexit has also narrowed her scope for flexibility. Taking Britain out of the EU single market and customs union is the proximate cause of Scotland’s discontent. It is also the source of mounting opposition to Brexit in Northern Ireland. There would be no possibility of a hard border in Ireland if the government had not chosen a Hard Brexit. And it is primarily because the government wants to negotiate a comprehensive free trade agreement with the EU, and to strike its own trade deals with the rest of the world, that it is resisting the devolution to Scotland of the powers over agriculture and fisheries that will be repatriated from Brussels. (What’s more, if Britain leaves the EU without a deal, and unilaterally removes all tariffs in order to smooth its path to the WTO, the impact would be disproportionately felt by Scotland’s manufacturers, farmers, and distilleries.) The logic of Hard Brexit is Conservative-Unionist, when to meet the aspirations of its constituent nations, and to hold itself together, Britain needs a flexible, federalist approach.

History is in danger of repeating itself. The last time the United Kingdom was challenged by the aspirations for greater self-determination of a significant proportion of one of its nations was during the long struggle for Irish Home Rule. Conservative-Unionists met that challenge by suppression, not accommodation. It didn’t end well.

The second factor is the decline of the Labour Party. It has been widely remarked that the SNP will use Labour’s electoral weaknesses to present the referendum as a choice between independence and indefinite Conservative government at Westminster. But a near-term calculation is at work here too: Labour’s decline means that the referendum campaign itself will be fought between the SNP and the Conservatives. Labour will not carry the banner of unionism – the very term is now toxic for the party in Scotland – and, with a UK leader who cannot even stick to an agreed script, it will be incapable of marshalling anti-nationalist forces as it once did. The referendum will become the straight fight with the Conservatives that the SNP has always wanted.

Labour’s vacillation on Europe means that it is currently largely voiceless in the national debate on Brexit. It is shedding votes to the Liberal Democrats as a consequence. It fears a further loss of support to UKIP and the Conservatives if it backs membership of the single market and customs union in the Brexit negotiations. But the prospect of the breakup of the UK, the unstitching of the Northern Irish settlement, and economic decline in its heartlands should give it cause to consider the national interest, not just the party interest. Labour could make itself politically relevant to the future of the UK, and to the Brexit negotiations, if it changed tack and supported continued membership of the EU single market, as well as a new (quasi) federal constitutional settlement for the UK (perhaps even creating an English Labour Party in the process). Perhaps this is unthinkable, even for a desperate party. But without such a change, there is no prospect of a parliamentary bloc that unites pro-European Conservatives with Labour, the Liberal Democrats, the SNP and other parties in meaningful opposition to the government. And without that, there is every prospect of a Hard Brexit and the breakup of the United Kingdom.

Addressing the evidence deficit: how experimentation and microsimulation can inform the basic income debate

📥  universal income

Dr Luke Martinelli is Research Associate on the IPR's universal basic income project. This post draws in part on material presented in his recent IPR working paper The Fiscal and Distributional Implications of Alternative Universal Basic Income Schemes in the UK.

New and forthcoming IPR working papers – as well as experimental data from policy trials currently and imminently taking place across the world – address some of the core empirical issues around the feasibility of universal basic income (UBI), and how it can be designed most effectively. However, no amount of evidence can provide an escape from difficult political choices in the face of unavoidable conflicts between policy goals – or eliminate the need for advocates to address longstanding normative objections to UBI.


Despite numerous and well-publicised desirable qualities which appeal across the political spectrum, the case in favour of universal basic income (UBI) is far from decisive. There is much we still don’t know about UBI that must be addressed in order for the policy to move beyond superficial, ‘cheap’ support and to gain serious traction in the political sphere.

As Malcolm Torry argued in a previous post, as policymakers have begun to pay increasing attention to grassroots enthusiasm for UBI and acknowledge its theoretical strengths (as well as the weaknesses of existing social security provisions), there has been a shift from concerns about whether UBI is desirable per se, to questions of political feasibility and the best way to design and implement the policy. According to Torry, this progression is “evidence for the increasingly serious nature of the current debate”.

What policymakers need to know

Objections to UBI are numerous, but it is perhaps possible to identify three core barriers to feasibility as follows:

  • it would dis-incentivise work and encourage idleness;
  • it would be too costly; and
  • it would be inadequate to meet the diverse and complex needs of the poor.

As a result of these attributes – which are also, of course, inherently undesirable – sceptical observers have claimed that UBI would never generate the political support required for implementation. If labour supply were expected to contract significantly, the tax base would collapse and the policy would be seen as unsustainable. If the cost were seen as too high, voters would not consent to the requisite tax rises. And if disadvantaged people were to become poorer as a result, the policy would be seen as unacceptably unjust.

In response, advocates have claimed on the contrary that UBI would remove poverty and unemployment traps, would only require minor tax increases, and could easily accommodate provisions for those with higher needs (for example due to disability or housing costs).

Of course, the extent to which these attributes pertain – the labour market effects, fiscal costs, and distributional implications – depends wholly upon the specifics of the UBI scheme in question. As I discussed in a previous post, while basic income’s core attributes are a matter of definition, many are variable: most crucially, the level of payments, and the extent to which the UBI is conceived as replacement for or complement to existing benefits.

UBI is often discussed as a monolithic policy, which obscures clear understanding about likely effects; opponents debate at cross purposes, discussing completely different schemes and using them to support their favoured stance. Common ground – and an end to the impasse on these core issues – can only be achieved as a result of greater clarity about the diversity of proposals that exist, and the specific effects of varying core design features.

Policymakers are ultimately concerned about the practicalities of implementation: they need to know which schemes are feasible in the short- to medium-term, in the prevailing socio-political climate. They simply cannot afford to entertain utopian visions of a future in which no-one is compelled to work, or in which people are happy to accept tax rates an order of magnitude greater than those prevailing today. If they are to support UBI, they need to know whether the aforementioned barriers to feasibility can be bypassed – and how.

Fortunately, empirical evidence can help to assess the contradictory claims of advocates and opponents more effectively. There are two main forms of evidence: ex-ante (‘before the event’) models/simulations and ex-post (‘after the fact’) impact evaluations.

Ex-post evidence

Trials and policy experiments are important for several reasons. Most obviously, they give us important information about the effects of implementing basic income that cannot be gleaned from theory (for example and probably most crucially, on disputed labour market behavioural effects, but also on other non-financial outcomes such as health and wellbeing). Trials are also invaluable in uncovering unexpected outcomes and implementation issues, and in fine-tuning the detail of the policy in advance of scaling-up.

Perhaps equally importantly, they serve to foment conversation and political debate which simply would not occur in the absence of the trial. As Jurgen De Wispelaere observed in an IPR seminar, experiments

have important political demonstration effects… advancing the policy agenda by raising awareness amongst key stakeholders/general public, keeping open a window of opportunity, building a broad political coalition “en route”, and overcoming objections by demonstrating impacts

Basic income advocates have long drawn on trials that took place in the US and Canada between 1968 and 1980 (Widerquist, 2005). Observers note that despite some contraction in labour supply, these were far from the employment exodus predicted by UBI’s harshest critics; Forget (2011) has documented the health and wellbeing benefits of the Canadian trials.

There are also two prominent historical examples of universal payment programmes (not trials) in diverse country contexts: the Alaskan Permanent Fund Dividend and Iran’s reform of consumption subsidies (Widerquist and Howard, 2012; De Wispelaere, 2016). These examples demonstrate the administrative and political feasibility of UBI-type schemes. Experiments with basic income have recently taken place in Namibia (Haarmann et al., 2009) and India (Davala et al., 2015) with strong positive findings. Standing (2008) argues that the lessons from the implementation of various forms of (non-UBI) cash transfer in developing countries also provide compelling evidence in favour of UBI, in that negative labour market responses have been minimal and it is likely that the poor use income transfers to invest in productive activities – not for frivolous consumption, as is often portrayed.

Since 1 January this year, 2,000 Finnish individuals have been receiving an unconditional payment of €560 per month as part of a two-year government pilot. A number of city authorities in the Netherlands have been granted permission – under central government legislation allowing policy experimentation – to conduct trials eliminating the imposition of behavioural (labour market) conditions for benefit claimants. Both trials look sure to provide valuable information on how recipients respond to unconditional payments.

There is now a spate of proposals for further pilot schemes to be implemented in the coming years. Some of the interest has come from devolved regional and city-level authorities (Ontario in Canada; Fife and Glasgow in Scotland); some has come from the development assistance community (as in the case of the GiveDirectly pilot in Kenya); some has come from national administrations (India); and some has come from the private (tech) sector (Y Combinator in California).

Limits to experimental evidence

However, there are limits to the insights that can be gained from the aforementioned experiments, including the ongoing Finnish and Dutch case studies.

In many cases the experiments fall far short of the evidential requirements of randomised controlled trials – the so-called ‘gold standard’ of policy evaluation. This is certainly the case in the example of the Namibian Basic Income Grant scheme, which is heavily criticised by Osterkamp (2013) as employing biased outcome indicators and lacking a control group – but methodological problems abound more broadly. For one thing, trials are necessarily of limited duration, and may not easily pick up longer-term effects of policy. In addition, the behavioural response to a trial of limited duration may be very different to the response to a policy that provides income security for a lifetime.

Further, policy outcomes depend heavily on the specific contexts in which they are implemented, limiting the applicability (or ‘external validity’) of trials to other countries, time periods, or groups of recipients. Experiments carried out in developing countries provide limited insight into the potential for basic income to be inserted into comprehensive welfare states such as the UK’s. The US and Canadian experiments are now several decades old. In the case of the experiments in Finland and the Netherlands, researchers are limited to applying the UBI ‘treatment’ to existing benefit claimants. It is not clear how the observed effects would differ from those for other groups in the context of a truly universal payment.

Most of the trials only approximately resemble UBI ‘proper’, or the types of UBI upon which policy interest is most focused today in high-income countries (which aim at least to partially replace existing welfare state policies). The Alaskan and Iranian programmes differ in several crucial respects from the proposals of most basic income advocates, namely in the low and fluctuating value of payments and in their funding mechanisms; being paid from natural resource revenues, they are arguably significantly less likely to face political opposition (De Wispelaere, 2016). The US and Canadian experiments differ in that they involved a tax rebate mechanism based on reported income rather than an upfront payment (these are more usually termed negative income tax schemes rather than UBI). In the Dutch experiments, the implementing authorities were restricted by central government for political reasons, resulting in watered-down research designs. Specifically, claimants in the treatment group can only keep 50% of additional earnings, up to a maximum of €199 (ensuring their total combined income remains less than someone would earn working full-time at minimum wage), completely contradicting the principles of UBI. Even in the Finnish experiment – the first trial that could reasonably be described as a ‘real’ UBI within a high-income, mature welfare state – researchers were unable, for practical and administrative reasons, to implement tax changes alongside the implementation of the UBI (KELA, 2016). Considering that changes to the tax system are almost always a core element of any realistic basic income proposal, this is a significant weakness.

Another crucial limitation is that upscaling a policy to the national level seems likely to result in markedly different effects – with different implementing authorities, and significant macroeconomic effects not captured in the trial. As Widerquist (2005) observes, being trials of limited scale, the US and Canadian experiments did not give rise to a demand response “and therefore could not estimate the market response” of the policy (p. 50). Moving from an experiment run by a small, dedicated team of researchers to a nation-wide policy administered by a sprawling and perhaps badly-funded bureaucracy is likely to give rise to unforeseen implementation problems. In other words, the effects of a trial may be very different from those of the same policy rolled out across the board.

Finally, even if we are able to observe – reliably – the impacts of a policy, find that the effects are positive, and generalise the findings to another context, experiments such as these do not offer any way of weighing up beneficial impacts (relating to improved income security) against UBI’s fiscal costs (and against the costs and benefits of alternative policies). It is hardly surprising that giving people money would have a number of positive impacts; the question is whether UBI is the best use of the funds.

Ex-ante evidence

Fortunately, these are questions to which ex-ante microsimulation methods can readily be applied. Microsimulation is a common approach to evaluating the effects of tax and benefit reforms with respect to fiscal implications, distributional effects, and (less commonly) impacts on static work incentives. Advances in computing power, combined with the availability of large, representative income surveys, make it possible to compare outcomes of the prevailing ‘base’ policy environment with other hypothetical policy systems. This means that we have much greater capacity to assess and compare large numbers of different permutations of UBI.

Because it models the effects of policy reforms over a representative sample, microsimulation enables researchers to draw an accurate picture of overall impacts on the income distribution at the national level. However, a major shortcoming of this type of analysis is that it assumes no behavioural change (e.g. labour market response). This seems unrealistic in the context of such a wide-ranging reform as the implementation of a universal basic income, especially one paid at a generous level. For these reasons, microsimulation evidence should be complemented with ex-post analysis of observed behavioural responses.
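To make the mechanics concrete, the sketch below shows a toy static microsimulation in Python in the spirit described above: it compares net household incomes under a simplified ‘base’ tax system with a hypothetical reform that replaces the personal income tax allowance with a flat payment of equivalent value (akin to the ‘transitional’ schemes discussed later). The toy survey data, weights, single tax rate and payment level are invented purely for illustration – they are not the parameters, data or models used in the IPR working papers.

# Minimal static microsimulation sketch (illustrative assumptions only).

# Toy 'survey': (gross annual income in GBP, survey weight = households represented)
SURVEY = [
    (0, 1_500_000),
    (8_000, 2_000_000),
    (15_000, 3_000_000),
    (22_000, 4_000_000),
    (30_000, 4_500_000),
    (45_000, 3_500_000),
    (70_000, 2_000_000),
    (120_000, 500_000),
]

BASE_ALLOWANCE = 11_500   # assumed personal allowance under the 'base' system
TAX_RATE = 0.20           # single illustrative marginal tax rate
UBI_ANNUAL = 2_300        # hypothetical flat payment = value of the allowance
                          # at the assumed rate (11_500 * 0.20)

def net_income_base(gross):
    """Net income under the base system: tax above the allowance, no UBI."""
    return gross - TAX_RATE * max(gross - BASE_ALLOWANCE, 0.0)

def net_income_reform(gross):
    """Net income under the reform: allowance abolished, flat UBI paid to all."""
    return gross - TAX_RATE * gross + UBI_ANNUAL

def simulate():
    net_cost = 0.0
    print("Gain/loss by gross income (static, i.e. no behavioural response):")
    for gross, weight in SURVEY:
        gain = net_income_reform(gross) - net_income_base(gross)
        net_cost += gain * weight  # aggregate over the weighted households
        print(f"  gross £{gross:>7,} -> change in net income £{gain:,.0f}")
    print(f"Static net fiscal cost of reform: £{net_cost / 1e9:.1f} bn per year")

if __name__ == "__main__":
    simulate()

Running the sketch illustrates the point made above: households below the allowance gain, those above are broadly unaffected, and the weighted sum gives a static net cost – while saying nothing about how labour supply might respond, which is precisely the gap that ex-post evidence must fill.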

A number of microsimulation studies have already modelled the effects of specific UBI schemes in the UK (e.g. Torry, 2016; Reed and Lansley, 2016). These studies have been instrumental in identifying ‘feasible’ ways of implementing basic income so as to minimise household losses and keep costs within the boundaries of ‘politically acceptable’ tax increases. The downside is that such schemes require the retention of the existing structure of social security alongside the UBI.

With our IPR Working Papers, we add to this burgeoning literature in important ways, with the specific intention of objectively informing policy audiences about the difficult decisions involved in designing UBI schemes. In particular, in The Fiscal and Distributional Implications of Alternative UBI Schemes in the UK, I systematically compare UBI schemes with a range of payment levels and compensatory tax/benefit changes. Unlike previous studies, I start from the presumption that at least part of the appetite for basic income arises from its promise to sweep away the mainstay of complex, intrusive and stigmatising means-tested benefits. In another (forthcoming) paper, Exploring the Distributional and Work Incentive Effects of Plausible UBI Schemes, I look at the distribution of winners and losers in more detail and introduce important indicators of static work incentives.

Combined, these microsimulation studies provide a great deal of important information required by policymakers in assessing competing UBI proposals, particularly bearing in mind the need to restrict net costs and the motivation to reduce poverty and unemployment traps that arise in means-tested systems. ‘Transitional’ forms of UBI – for example, one replacing the personal income tax allowance with a payment of equivalent value, and others covering subsets of the population – are suggestive of possible pathways to more generous and comprehensive forms of UBI. In consideration of the likely political imperatives, we model a number of (broadly) revenue-neutral schemes as well.

The inescapable conclusion of my research is that there are no easy answers to the questions facing UBI advocates; no ‘optimal’ basic income scheme. Rather, policymakers are faced with a series of trade-offs between competing goals of a) meeting need, b) controlling cost and c) retaining administrative simplicity and enhancing work incentives (through the elimination of means-testing). The analysis thus draws our attention to the difficulty involved in designing basic income schemes that satisfactorily compensate existing beneficiaries of the system while retaining the principle of universalism.

Complementary forms of evidence

This blog post has summarised the potential of two forms of evidence to inform debate and bring the current impasse around the feasibility of UBI to a close. I hope to have shown that ex-ante and ex-post studies are complementary; ex-ante simulations can say much about the fiscal and distributional effects of basic income, but nothing about behavioural responses or implementation challenges – and ex-post evaluations can provide insights into these outcomes, but have a number of shortcomings that limit their applicability to wider contexts and their utility in assessing different policy design features against alternatives.

While much public attention has been devoted to the upcoming trials, therefore – and while such trials certainly have their place – they cannot give us the full picture on UBI, particularly in relation to the fiscal feasibility of schemes. This is the value of the microsimulation approach I've presented in the IPR’s work; the evidence generated, I hope, will tell policymakers the other half of the story.

The full working paper The Fiscal and Distributional Implications of Alternative Universal Basic Income Schemes in the UK can be read and downloaded here.

 

References

Davala, S., Jhabvala, R., Standing, G. and Mehta, S. K. (2015). Basic income: A transformative policy for India. London: Bloomsbury Publishing.

De Wispelaere, J. (2016). “Basic Income in Our Time: Improving Political Prospects Through Policy Learning?” Journal of Social Policy, 45(4): 617-634.

Forget, E. (2011). “The Town with No Poverty: The Health Effects of a Canadian Guaranteed Annual Income Field Experiment.” Canadian Public Policy / Analyse de Politiques, 37(3): 283-305.

Haarmann, C., Haarmann, D., Jauch, H., Shindondola-Mote, H., Nattrass, N., van Niekerk, I. and Samson, M. (2009). Making the difference! – The basic income grant in Namibia. Assessment Report. Windhoek: BIG Coalition.

KELA (2016). From Idea to Experiment: Report on universal basic income experiment in Finland. KELA Working Paper 106/2016. Helsinki: KELA.

Osterkamp, R. (2013). “The Basic Income Grant Pilot Project in Namibia: A Critical Assessment.” Basic Income Studies, 8(1): 71-91.

Standing, G. (2008). “How Cash Transfers Promote the Case for Basic Income.” Basic Income Studies, 3(1).

Widerquist, K. (2005). “A failure to communicate: what (if anything) can we learn from the negative income tax experiments?” The Journal of Socio-Economics, 34: 49-81.

Widerquist, K. and Howard, M. (eds.) (2012). Alaska’s Permanent Fund Dividend: Examining Its Suitability as a Model. New York: Palgrave.

 

Spring Budget 2017: T-levels, apprenticeships and industrial strategy

📥  Economy, education, future, labour market, policymaking

Dr Felicia Fai is Senior Lecturer in Business Economics and Director of Widening Participation and Outreach at the University of Bath's School of Management.

In many ways, there were no real surprises in the Spring Budget, with many of the initiatives having been announced in the Autumn Statement, which focussed more specifically on science and industry. The point of greatest novelty (although still not a complete surprise) was the focus on the longer-term future pipeline of talent in the workforce and the need to raise productivity in the UK. There is some attempt on the government’s part to more comprehensively approach the issue of the future workforce, and to provide an alternative but equally prestigious and valuable route into education and careers to the standard ‘A-level + Bachelor’s degree’ route. The government will create the ‘T-level’ for 16-19 year-olds, in which formal training hours will be increased by 50% over existing options and include a minimum 3-month placement in industry to ensure school leavers are ‘workplace ready’. This is in addition to other vocational initiatives that the previous parliament established, such as the creation of 1,000 degree apprenticeships, plus implementation of the new apprenticeship levy that will commence in April 2017. Beyond the 16-19 T-levels, loans are to be made available on a similar basis to existing support for university degrees to study at the new institutes and technical colleges the government intends to create. Further, at the highest educational levels, there is £300m funding support for 1,000 PhDs across all STEM areas.


The announcement of T-levels and a commitment to apprenticeships is welcome. The UK has long suffered from having too few clear alternative routes into skilled and high-paid work – routes well recognised by both applicants and employers – other than university degrees. It is also clear to me, as a university lecturer, that the degree structure, and the forms of learning and knowledge testing used as standard in degree-level programmes, do not suit all learners; nor are they always the most appropriate way to develop skills. As a senior admissions tutor for undergraduate programmes, I consider applications from mature applicants in their early- to mid-20s who state that, whilst they have progressed since leaving school, they now realise their ability to advance further in their careers is blocked by not having a formally recognised degree. I do wonder whether the decision to attend HE is the right one for them.

Sometimes, people are not ready emotionally or intellectually to deal with university-level education at 18, so choose not to apply for entry straight after school. Coming in later would seem appropriate, and we welcome them, as they are more likely to succeed now than they would have been had they tried to come earlier. Others may have avoided university because they recognised early on that they did not want to, or were not able to, think in the particular ways in which we require students to think in order to achieve good marks in academic institutions driven by a strong research culture. For example, a recurring weakness in exam performance is the failure of students to answer the specifics of the question set; instead, they display the general breadth of their knowledge and their ability to make connections between the content they experienced in one subject and the content the specific exam is testing. The latter is looked for more generally in coursework or dissertations, but is not always appropriate in examination settings. There have been times in my career when I have seen the promise of an individual in the workplace setting and known that they will be a truly amazing employee, manager or future leader precisely because of their ability to see the ‘bigger picture’; yet, in the classroom and in written coursework and exams, they do not reveal the academic skills and precision that would get them the marks which signal their potential. Being ‘book smart’ is different from being ‘street smart’, but our current system of HE is highly skewed towards the former.

The T-levels will offer a more streamlined pathway, with focused routes into 15 different areas, and have the potential to offer a different and equally valued and prestigious route into a career; but will their potential be realised? Leaving specific content aside, one of the key problems is the low profile, poor advertising and opacity associated with alternative routes into a career. The most well-established path is GCSEs, A-levels and then a university degree. Chancellor Philip Hammond noted in his speech that 13,000 vocational and technical qualifications exist. How many of these are well-recognised and valued by HE institutions and employers? How much advice can cash-strapped schools and colleges provide on these qualifications to individuals looking for a career path that does not involve attending university for a bachelor’s degree? Arguably among the most well-established and widely recognised vocational qualifications are HNDs, NVQs and BTECs; how will these fare with the introduction of the new T-levels? Will the T-levels be a complementary or alternative offering to these existing qualifications, and, again, how will under-funded schools and FE colleges cope in terms of resourcing them? Whilst the Chancellor is keen to maintain choice, in reality will this mean cutting back on the provision of existing vocational qualifications?

Even if T-levels could be introduced smoothly, there is the question of how they would lead on to further training and qualifications. One can envisage that T-levels could lead either directly to an apprenticeship, or to a place on one of the new degree apprenticeships that should emerge in the next few years, much as A-levels are the most commonly accepted way of accessing bachelor’s degree programmes. However, again, this pathway is not as smooth as the one into existing degrees.

Whilst the government proudly announces its claim about 1,000 new degree apprenticeships being formed, the system that alerts people to these opportunities is hard to find and tricky to navigate. The chances of a person finding the right degree apprenticeship for them are remote – at least without a significant personal investment of time and research effort trawling through university or employer websites. The UCAS website provides basic information about apprenticeships, questions to consider and how to apply. It also lists employers with current schemes and links through to the government’s apprenticeship website – but from there the application process proceeds on a case-by-case basis because applicants are considered to be applying for jobs. Degree apprenticeships should grow quickly in the next few years, given the compulsory levy, and assessing these entirely on a case-by-case basis is likely to become increasingly bureaucratic and cumbersome for both the employer and the university partner – who both need to be satisfied the applicant meets their respective requirements. The T-levels, alongside the better-recognised and better-established vocational qualifications, could be used as publicly available entry criteria by the universities providing the degree apprenticeships on the UCAS website. The applications should be made through an expanded UCAS service so that one application could be sent to multiple degree apprenticeships. From there, universities could select applicants who meet their academic requirements in a first round of consideration, and then this subset could be forwarded for consideration by the employing organisational partner in a second stage of the selection process; together, these actors could make a decision as to the suitability of the applicant. This would streamline the process for applicants, universities and employers alike, reducing the opacity and confusion of a currently complex pathway between school, post-16-19, further education, higher education and beyond.

The announcement of T-levels is an interesting proposal, and a welcome one at that – but there needs to be deeper and more systemic policy-thinking about how its introduction and implementation, as well as that of the apprenticeship levy, will lead to a greater proportion of the future workforce having the requisite skills to raise UK productivity.

 

Is reform of social care doomed?

  

📥  health, Political sociology, Public sector

For people who have worked in UK public policy in recent decades, whether as civil servants, politicians or advisers, there is something wearily familiar, and depressing, about the current debate on the reform of social care. A fair chunk of the period I worked in No10 Downing Street, between 2007 and 2010, was spent on social care policy: on reports commissioned from the Prime Minister’s Strategy Unit, papers drafted by committees of civil servants working up options for cabinet sub-committees, notes for political discussions between ministers, party conference announcements, and even legislation. None of it went anywhere. Cross-party talks were scuppered by the Conservatives, the Treasury dug in against reforms considered fiscally unsustainable, and Labour malcontents in the House of Lords blocked legislation that they thought was partial and incoherent. Nor did it get much better after 2010 – Andrew Dilnot was commissioned to review social care funding, but his recommendations were kicked into the long grass, while local government spending on care services fell under the heaviest of axes.


Why has social care remained unreformed, when other public services have been subject to extensive, often unrelenting change? It is not simply lack of political will, though that has played a part. Nor can it be that the funding and organisation of social care is more complex and difficult to reform than other areas of public policy; pensions policy, for example, has been successfully reformed, on a largely consensual basis, in the last decade. The concepts of mainstream public policy analysis – punctuated equilibria, multiple streams analysis, or narrative policy frameworks through which policymakers make sense of the world – do not seem to provide much explanatory help. Instead, we should look to the political economy of welfare states.

The social care system (here taken to refer primarily to social care in England) is staffed by low-wage, largely non-unionised, predominantly female employees working for private companies. There are no high-status, powerful professionals, like NHS hospital consultants, in social care – nor strong trade unions organising a high proportion of care staff. The workforce is heavily dependent on EU migrant labour. Services are mostly commissioned from private companies by local government, rather than provided by the public sector itself. Social care was kept separate from healthcare in the 1948 settlement, meaning that it has never benefited from the popular support and protective institutional aura of the NHS. Social care consequently does not generate institutional interests that are capable of powerful political expression: the labour voice is weak; professional vested interests are marginal; there is no national public sector body responsible for the service; and the business interest is uncoordinated.

Older people using social care are not politically mobilised, like parents of school children or NHS patients. Most of us are myopic about our future care needs; we tend not to plan ahead for the care we will need. For those suffering long-term conditions, like dementia, care will be needed for a long time – but for many of us, care services will be limited to end-of-life support of relatively limited duration. We know that we will need a pension for retirement, and health services throughout our lives, but not whether we will require social care. This means that the state is under limited pressure properly to fund and improve care services. In recent months, much of the political concern about social care has been generated by the knock-on impact that cuts to local government services have had on the NHS.

The social care systems of so-called liberal welfare states like the UK, Ireland, Australia and the USA share many features. They are residual, relying heavily on limited means-tested safety nets, rather than providing universal coverage. Low levels of expenditure on means-tested assistance are funded from general taxation. At the same time, private care insurance is limited (non-existent in the UK case), and nor is there comprehensive social insurance or a compulsory care savings scheme, as is typical of countries like Germany, France, Japan and Korea. Social care systems therefore tend to typify the welfare states of which they are a part: individualised, means-tested and general-taxation-funded liberal systems; universal, tax-funded Nordic systems in which care needs are decommodified; continental care systems that have developed from tripartite-funded (employer, employee and the state) social insurance systems; and East Asian systems in developed economies that have expanded compulsory care insurance coverage as their populations have aged, based on co-funded mechanisms.

Social care has also tended not to feature in Social Investment State (SIS) strategies that have dominated welfare state reform discourses in the UK and elsewhere since the 1990s. SIS conceptual frameworks prioritise employment and human capital investment, and privilege childcare and support for parental employment, over care of the elderly and adults with disabilities.

What then are the prospects for successful reform of social care in this latest round of policy debate? Substantively, the UK is unlikely to pursue the compulsory/social insurance or universal tax-funded reform options that have been developed in other welfare states – we lack the political economic foundations and politically mobilised social group interests for those kinds of reforms. More likely, ministers will tilt towards co-payment models or tax-incentivised private savings vehicles, with a floor of means-tested support. These will be partial and inegalitarian, however, since they do not pool risk across the population, and they tend to squeeze those who have income and assets just above the threshold for means-tests, while enabling those higher up the income and wealth distribution to buy better services, and forcing low-income families to rely on low-quality services – poor services for poor people. Meanwhile, ministers will put just enough funding into social care services to stave off collateral damage to the NHS, as the Chancellor did with an extra £2 billion over three years in his budget.

Pressure for change may depend on the politics of ageing. Turnout in UK elections is heavily skewed towards older voters, who currently form a solid bloc of support for the Conservative government. This demographic political inequality is commonly thought to explain why pensions and benefits for older people have received relative protection in the era of austerity, while inheritance tax is cut and wealth levies (the so-called "death tax") are abjured. Academic research into the politics of age is unfortunately more limited than that into social class or occupational groups (although it is a growing field and interest from think-tanks has been developing). The politics of social care may come to turn on whether the collective interests of older people and their families in the provision of properly-funded, comprehensive services, integrated with the NHS, can trump both the social class differences between them and the lack of broad coalitions of support that currently inhibit progressive social care reforms. For now, Whitehall watchers will not be holding their breath.

 

Expecting the unexpected: what resilience should mean to policymakers

📥  cities, future, sustainability

Dr Kemi Adeyeye is Senior Lecturer in Architecture in the University of Bath's Department of Architecture and Civil Engineering. This post draws on material first presented in a recent published paper.

Evidence, and perhaps the experience of seemingly perpetual rain on one’s face, suggests that the weather is one thing that is increasingly variable and difficult to predict. The impact of this goes beyond deciding whether to take an umbrella, or wear an extra layer of clothing, when you go out in the morning. Like other shocks, temperamental weather can and does affect various aspects of economic, environmental and social life. In an ideal world, both policy and the built environment would be developed with a level of inbuilt resilience (that is, the capacity to cope with and absorb shocks), a recognition of the need to adapt, change and reorganise, and measures to mitigate the impact of future shocks.


Indeed, most human and physical systems are designed to cope with ‘extremes’ – but often within the range of what is ‘expected’. ‘Unprecedented’ is now a common term used by politicians, the media and some experts to describe current weather events that are extreme, but not within the expected range of extremity. One unprecedented event soon supersedes the next, however, and the next one after that – so to what extent are these events really unprecedented? And to what extent can the impact and consequence of weather events such as flooding be considered a surprise? For scientific answers to these questions, I encourage the reader to review the work of my colleague Dr Thomas Kjeldsen. In this piece, however, I will spend some time considering the concept of anticipation, before concluding with what resilience should really mean to urban planners and policymakers.

Anticipating change

Studies show that, as human beings, we are ontologically programmed to engage in ideations that allow the anticipation of space, time, causality and subjective probability. This is referred to as our evolutionary potential[1] – i.e. our ability to promote preparedness and maximise the probability of proactive change through historical memory, knowledge, expertise and experience. Anticipation is innately formed through memory and experience rather than the unknown. To this end, we are prone to engage in mental time travel, reliving past experiences as the basis for imagining the future. However, we should also be aware of the fact that experiences are carried forward in time through memory (individual or collective), which means that such practices can affect welfare. That is, the effectiveness of memory and/or experience to engender actions and preparedness for resilience can vary depending on how we remember, with a consequent impact on the actual outcomes of shocks. The problem with relying too much on memory is that we soon forget – another useful evolutionary skill to help to cope with trauma.

Anticipation can be both forward- and backward-looking. Using the term ‘unprecedented’ suggests that the extent of our anticipation remains backward-looking, and this supports the prevalent reactionary approach to resilience – whereby capacity is only expanded after it has been overwhelmed by an extreme event. But we need both; forward-looking anticipation, particularly in the context of climate change, needs to be underpinned by past learning. Now, I am sure that scenario planning is taking place across the policy realms at present, building on our current tools and codes to explain and take action when the unexpected event happens. However, this approach does not always translate into dynamic planning for potential future uncertainties – when a comprehensive, flexible response may be required for the next unprecedented scenario.

Rising above the flood

Take flooding. There are some good social and economic reasons for current and future developments on or near water. There is also little choice in some instances. For example, large parts of the Netherlands lie several metres below sea level. As mentioned later, Dutch planning and building practices have therefore advanced to manage the associated risks effectively. For others, flooding can be cyclical, but also sudden. This introduces general and specific issues to the equation to do with quality of life; economic, environmental and social vulnerability; security; physical, urban and building resilience; and so on.

These are factors that should not be ignored. The OECD forecasts that, without effective change, the total global population exposed to flooding could triple to around 150 million by the 2070s due to continued sea-level rise, increased storminess, subsidence, population growth and urbanisation. Further, asset exposure could grow dramatically, reaching US$35 trillion in the same period – roughly 9% of projected annual GDP. (The NHS budget, for comparison, is at present around 7% of UK GDP.) Unlike the NHS budget, however, the bill for inaction on resilience is one best avoided. Exposure to risks does not necessarily translate into impact when resilience is “designed in” through coping and adaptive mechanisms.

So how can we design systems that are resilient and able to contend with unpredictable challenges, such as environmental change? Staying with the theme of flooding, we can learn from approaches that have worked at other times and in other places to better anticipate the future. We can learn not to be so set in our ways, but to dare to be flexible and embrace new ways of working. This is particularly important in the UK context, where our planning rules are entrenched in tradition and our design and building practices can be slow to evolve. Although innovative practices have started in some areas, changes remain piecemeal and inconsistently applied across the country. Unlike exemplary building codes and standards elsewhere in the world, the UK Building Regulations still contain no explicit resilience requirements – so we are missing out on more consistent, widespread implementation, as well as losing the opportunity to promote resilience alongside current sustainability standards, especially in housing developments.

Facing the future

Better integration of good governance, planning, infrastructure and architectural design would be a good first step towards closing the gap between where we are today and our future potential. On governance, there need to be visionary, non-ambiguous and tangible planning policies and regulatory requirements for resilience – particularly in the built environment. Formal building and planning policies, as they stand, could do more to promote forward-looking design and planning solutions, or to facilitate the development of resilience and adaptive capacity against natural events.

But new laws and regulations will not be enough. More should also be done to better equip individuals and communities for the task of planning and acting in their own best interests, or even actively participating in or influencing policy processes. It should also be possible to improve individual and collective anticipation by making positive use of “memory” – experiences of, and effective responses to, past climatic extremes. Improving agency by making better use of wider communication networks to provide access to information, raise awareness and prompt action for resilience would also be a positive step.

Building resilience

Examples as old as the Indus Civilisation[3] and as contemporary as the Waterwijk in Ypenburg show that good governance and social measures are not enough on their own. Effective planning, good infrastructure and innovative architecture should be combined to reduce physical and social vulnerabilities. This underpins the argument for an integrated design approach to resilience (Figure 1).


Figure 1. Combined integrated resilience map showing applicability and impact
The chart (after Roberts, 2013[2]) presents combined case study findings along two axes, in four quadrants. The x-axis shows the contributions of important stakeholders, including governance representatives; professionals such as architects, engineers and planners; and the people. The y-axis shows the physical outputs through planning, building and infrastructure solutions. The content of the map presents the physical and social solutions, highlighting impact (the size of the circles) and the range, based on the six applicability measures presented in the conceptual framework. In many instances the applicability measures overlap, and the map therefore shows the most relevant measure for the particular case.

Policymakers and planners of the built environment who adopt such an approach should aim to achieve three major goals. Firstly, to deliver solutions that emphasise social place-making and capacity building – building communities whilst placing water at the forefront of communal consciousness, for example. Secondly, to implement resilient infrastructural solutions that are flexible yet future-proof. Thirdly, to encourage solutions that do not simply hide water in underground drainage networks, but rather integrate it into the social fabric of a community through planning, engineering and architectural design.

Collaborative working between policymakers and diverse stakeholders – including building professionals – is key to achieving this. Planners should work positively with architects and engineers to deliver the most effective solution possible within each individual context. Innovative architectural ideas and solutions should be encouraged, and the needs of the public should be fully integrated within the decision-making process. For this to happen, government departments will need to talk and work more effectively together at the national, regional and local levels. There also need to be better mechanisms to include knowledge agents and the public in solution-forming conversations; technologies such as smart web tools and innovative apps can help to facilitate this process.

 

[1] Sahlins, M. D. and Service, E. R. (eds.) (1960). Evolution and Culture. Ann Arbor, MI: University of Michigan Press.
[2] Roberts, C. (2013). 'Planning for Adaptation and Resilience', in McGregor, A., Roberts, C. and Cousins, F. (eds.), Two Degrees: The Built Environment and Our Changing Climate. Routledge.
[3] Part 1 of Dr Sona Datta's BBC documentary series on the Treasures of the Indus may still be available on BBC iPlayer: http://www.bbc.co.uk/programmes/p030wckr/p030w89h

 

Sea-Changes in World Power

📥  Anglosphere, defence, International relations, Trump

In 1907, Theodore Roosevelt sent the US Navy battle fleet – the “Great White Fleet” of 16 battleships – on a symbolic tour of the Pacific. It was an awesome demonstration of the USA’s new naval power and an announcement to the world of its claims to dominion over the Pacific. The fleet was feted everywhere it went, but particularly so in Australia and New Zealand, where it was welcomed as the “kith and kin of the Anglo-Saxon race” bringing “a grateful sense of security to the white man in his antipodean isolation.” Japan was a rising military power. It had annihilated the Russian fleet in 1905. Racist attitudes towards Japanese migrant workers were running high in the USA and Australasia. “Stars and Stripes, if you please/Protect us from the Japanese”, wrote a New Zealand correspondent.


Roosevelt saw the fleet’s tour in similar terms. He was resolved to treat the Japanese government with courtesy and respect. But he wanted to assert the importance of keeping the world’s “races” apart, particularly when it came to migration into California, and he inflected his Social Darwinist arguments with a class populism: “we have got to protect our working men”, he was reported to have argued. “We have got to build up our western country with our white civilization, and…we must retain the power to say who shall and who shall not come to our country. Now it may be that Japan will adopt a different attitude, will demand that her people be permitted to go where they think fit, so I thought it wise to send that fleet around to the Pacific to be ready to maintain our rights”[1].

Roosevelt was heavily influenced by the naval strategist Admiral Alfred Mahan, whose books on the importance of sea power and naval strength were key military texts in the late 19th and early 20th centuries, read and absorbed not just by US foreign and defence policymakers, but by their counterparts in the capitals of all the leading world powers – including Great Britain, whose naval prowess Mahan much admired. Mahan was also highly influential on Roosevelt’s fifth cousin, Franklin D. Roosevelt, who devoured his books as a young man and was a lifelong navy enthusiast, serving as Assistant Secretary of the Navy in Wilson’s administration. As President, FDR would massively expand the US Navy. Spending on the navy – a sort of naval Keynesianism – gave renewed impetus to the New Deal in the late 1930s.

Donald Trump’s speech at the Newport News shipyard, which builds ships for the US Navy, and his pledge to expand the fleet to 350 ships, therefore stands in a clearly defined lineage. It heralds a renewed commitment to assert the naval primacy of the USA and significantly boost military spending. On its own, that might be lifted straight out of the recent Republican playbook – particularly in concert with tax cuts for the wealthy. But Trump’s economic nationalism and his anti-Muslim, anti-immigration rhetoric also trace a line back to fin-de-siècle Anglo-Saxonist political discourse. His rhetoric symbolically connects the projection of economic and military power to the fortunes of the American working class, particularly the white working class – Teddy Roosevelt shorn of the progressivism and diplomatic tact.

This time, of course, the main antagonist is China, not Japan. China’s navy has been expanding rapidly under Xi Jinping’s leadership. It has commissioned new missile carriers, frigates, conventional and nuclear submarines, and amphibious assault ships. A close ally of Xi’s, Shen Jinlong, has recently been appointed its commander. It has moved from defensive coastal operations to long-range engagements around the world. It will serve to underpin China’s assertion of supremacy in the South China Sea and the projection of its power further afield – towards the Indian Ocean, the Gulf and the Maritime Silk Road routes.

The respective strength and reach of national navies can mark out wider shifts in geo-political power. It was at the Washington Conference in 1921 that the USA finally brought the Royal Navy to heel, insisting on parity in capital ships, and setting the seal on the end of the British Empire’s global maritime supremacy. “Never before had an empire of Britain’s stature so explicitly and consciously conceded superiority in such a crucial dimension of global power,” wrote Adam Tooze of this capitulation. It would take until the late 1960s, when Britain finally abandoned its bases East of Suez, for the process of imperial contraction to be complete (a decision that the current Foreign Secretary laments and risibly promises to reverse).

With tension rising in the South China Sea, war and rival power conflict in the Middle East and the Gulf region, and the prospect of a scramble for power over the sea lanes of the melting ice caps of the North West Passage, this new era of naval superpower rivalry echoes the Edwardian world. Steve Bannon, President Trump’s self-declared economic nationalist adviser, believes it will end the same way: in war. It is up to the rest of the world to prove him wrong.

 

 

[1] For this quotation and other source material, see Marilyn Lake and Henry Reynolds, Drawing the Global Colour Line, Cambridge: Cambridge University Press (2008), Chapter 8, pp. 190-209.

 

Timing it wrong: Benefits, Income Tests, Overpayments and Debts

📥  employment, future, policymaking, Welfare

Professor Peter Whiteford is a Professor in the Crawford School of Public Policy at the Australian National University and Professor Jane Millar is a member of the Institute for Policy Research (IPR) Leadership Team, in addition to her role as Professor of Social Policy at the University of Bath.

Unexpected bills can be a challenge for any household. But for people who rely on social security payments, unexpected news of a significant debt – sometimes dating back years – can be bewildering to say the least. This is exactly what tens of thousands of Australians have experienced in recent months.

Since just before Christmas, Centrelink’s use of a new automated data-matching system has resulted in a significant increase in the number of current and former welfare recipients identified as having been overpaid and, thus, being in debt to the government. The data-matching system seems to have identified people with earned income higher than the amount reported when their benefits were calculated.


Many of these people were alarmed when Centrelink contacted them about the assumed debt. Their stories have been recounted over the past two months in the mainstream media and social media. The controversy prompted the Shadow Human Services Minister Linda Burney to request an auditor-general’s investigation. After receiving more than one hundred complaints about problems with the debt-recovery process, independent MP Andrew Wilkie asked the Commonwealth Ombudsman to step in, and he has since launched an investigation. The Senate Community Affairs References Committee will also examine the new process.

This is by no means Australia’s first social security overpayment controversy. The last storm was sparked by the expansion and fine-tuning of family tax benefits in 2000. Under that new system, families were given the option of taking their payments as reductions in the income tax paid on their behalf by their employer. To ensure that this group was treated in the same way as those who received cash benefits from Centrelink, the government introduced an annual reconciliation process. Before the beginning of each financial year, families were asked to estimate what their income would be in the subsequent tax year; later, after they had filed their tax returns, an end-of-year reconciliation process would bring income and family benefits into line.

This seemed like a rational system. People who had been underpaid could receive a lump sum to ensure their correct entitlement. People who had been overpaid would pay back the money that they weren’t entitled to keep. The reconciliation would correct any mistakes people made when they estimated their income for the year ahead (not necessarily an easy task to get right!) and make the system responsive to changes in income during the year.
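To make the arithmetic concrete, here is a minimal sketch in Python of how an end-of-year reconciliation of this kind turns the gap between estimated and actual income into a top-up or a debt. The payment amounts, free area and taper rate are invented for illustration; this is not the actual family tax benefit formula.

```python
# Illustrative sketch only: a simplified annual reconciliation of a
# means-tested family payment. Amounts, free area and taper are invented.

def annual_entitlement(income, max_payment=5000.0, free_area=30000.0, taper=0.2):
    """Entitlement tapers by 20 cents per dollar of income above the free area."""
    reduction = max(0.0, income - free_area) * taper
    return max(0.0, max_payment - reduction)

def reconcile(estimated_income, actual_income):
    paid = annual_entitlement(estimated_income)    # paid through the year, based on the estimate
    entitled = annual_entitlement(actual_income)   # true entitlement, known only after tax returns
    return entitled - paid                         # positive = top-up owed, negative = debt to repay

print(reconcile(estimated_income=35000, actual_income=45000))  # -2000.0: a $2,000 debt
print(reconcile(estimated_income=45000, actual_income=35000))  #  2000.0: a $2,000 top-up
```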

But many families’ estimates at the start of the year proved to be poor guides to income received during the year. This happened in both directions – some estimates were too high, some too low – but most often real annual incomes were higher than predicted. The result was a very large increase in overpayments and, thus, in debts. Before the new system was introduced, just over 50,000 families had debts at the end of each year; in the first year of the new system, an estimated 670,000 families received overpayments. Overall, around one third of eligible families incurred an overpayment in the first two years of the new system.

This is how the system was designed to work. But for the families who found themselves owing sometimes large and usually unexpected debts, the experience created confusion, stress and anger. It also generated considerable controversy in parliament and the media. So, in July 2001, just before an important by-election, the Howard government announced a waiver of the first $1,000 of all overpayments, which reduced the number of families with debts to around 200,000. Further fine-tuning came in 2002, also aimed at reducing overpayments and debts. Then, in 2004, an annual lump sum was added to family tax benefit A with the aim of offsetting any overpayments.

***

At around this time, Britain was designing and introducing a new system of tax credits for people in work (the working tax credit) and for families with children (the child tax credit). The system had some features in common with the Australian approach, including an end-of-year reconciliation. The British government was keen to avoid the sort of controversy that had blown up in Australia, so it included a mechanism for changing the level of tax credit not just at the end of the year but during the year as well.

The assessment for credits was initially made on the basis of gross family income in the previous tax year. If recipients reported changes in income and circumstances during the year, then the award was adjusted, and at the end of the year total credits and income were reconciled. But many changes in income and circumstances went unreported during the year and so, in practice, considerable adjustment was required. Over the first few years of the system, about 1.9 million overpayments occurred each year.

As in Australia, the system caused significant hardship and generated adverse media coverage and much concern. In 2005 and 2006, the British government introduced a number of changes designed to reduce overpayments, including a very substantial increase in the level of the annual income “disregard” from £2,500 to £25,000. This meant that family income could rise by up to £25,000 in the current award year before tax credits were reduced. The amount has since been brought back to the original £2,500, which will probably mean overpayments will start to rise again. Processes exist for recovering overpayments of tax credits and housing benefits, and these sometimes attract some media attention, most recently in relation to the use of private debt collectors.
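As a rough illustration of how a disregard softens that reconciliation, the sketch below (invented award amounts and taper, not the actual tax credit rules) ignores any in-year rise in income up to the disregard before recalculating the award.

```python
# Illustrative sketch only: how an income "disregard" limits end-of-year clawback.
# The award amounts, threshold and taper are assumptions for demonstration.

def award(income, max_award=6000.0, threshold=16000.0, taper=0.41):
    """A simplified tax credit award that tapers away above an income threshold."""
    return max(0.0, max_award - max(0.0, income - threshold) * taper)

def final_award(previous_income, current_income, disregard):
    # Only the rise in income beyond the disregard counts against the award.
    counted_rise = max(0.0, current_income - previous_income - disregard)
    return award(previous_income + counted_rise)

initial = award(18000)                              # award set on last year's income: 5180.0
small = final_award(18000, 30000, disregard=2500)   # 1285.0 -- most of the rise clawed back
large = final_award(18000, 30000, disregard=25000)  # 5180.0 -- the in-year rise is ignored
print(initial, small, large)
```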

***

Together with the current Centrelink controversy, the experience of these earlier cases offers four main lessons for social security policy.

First, getting payments “right” in any means-tested system is a complex process necessarily involving trade-offs between responsiveness and simplicity. If the aim is to precisely match income and benefit in real time, then there must be constant updating and checking of income and adjustments of benefits. But such a system would be very intrusive and administratively complex. So systems are designed to pay first and reconcile later, which makes overpayments almost inevitable.

Governments can minimise the impact by disregarding some overpayments, as both Australia and Britain have done in the past. But that is not part of the design of Australia’s latest program of debt recovery. People are being chased partly because the Budget Savings (Omnibus) Act 2016 toughened repayment compliance conditions for social welfare debts. New conditions include an interest charge on the debts of former social welfare recipients who are unwilling to enter repayment arrangements, extended Departure Prohibition Orders for people who are not in repayment arrangements for their social welfare debts, and the removal of the six-year limitation on debt recovery for all social welfare debt.

People ardently dislike systems that they don’t understand and feel are unfair, or that seem to create debts beyond their control. A very stringent approach to collecting overpayments can cause real hardship and generate controversy. It has even been suggested that there may be a punitive element to this, with Centrelink staff not encouraged or required to help people to correct errors.

Second, IT systems are not by themselves the cause of these problems. It is easy to blame the technology when things go wrong, and some problematic factors do indeed appear to be technological. The names of employers provided to the Australian Tax Office and Centrelink don’t always match, for example, and it appears that in some cases the same income is counted twice because the assessment process matches names rather than Australian Business Numbers.
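A hedged sketch of the kind of matching problem described above, using made-up records: keying income on free-text employer names, rather than on a unique identifier such as an ABN, can leave the same earnings counted twice.

```python
# Illustrative sketch only: why matching on employer *names* can double-count
# income that matching on a unique business number would not. Records are invented.
records = [
    {"abn": "11111111111", "employer": "Acme Retail Pty Ltd", "income": 12000},
    {"abn": "11111111111", "employer": "ACME RETAIL P/L",     "income": 12000},  # same job, different spelling
]

# Matching on the free-text employer name treats these as two employers...
by_name = {}
for r in records:
    by_name[r["employer"]] = by_name.get(r["employer"], 0) + r["income"]
print(sum(by_name.values()))   # 24000 -- the same income counted twice

# ...while matching on the unique business number counts the income once.
by_abn = {}
for r in records:
    by_abn[r["abn"]] = r["income"]
print(sum(by_abn.values()))    # 12000
```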

More significantly, Centrelink’s formula can produce false estimates of debts when individuals are asked to confirm their annual income reported to the Australian Tax Office, because it simply divides the reported annual wage by twenty-six. That overly simplified calculation will only produce a useful figure if individuals receive exactly the same income each fortnight, which is often not the case, especially for casual workers, students and other people with intermittent work patterns.
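The averaging problem is easy to demonstrate with a toy example – invented figures, and a deliberately simplified notion of “unreported” income rather than the real debt calculation: spreading an annual wage evenly over 26 fortnights attributes income to fortnights in which a casual worker earned nothing and was correctly on benefit.

```python
# Illustrative sketch only: annual wage / 26 vs. the actual fortnightly record
# for a casual worker who worked intensively for half the year. Not Centrelink's formula.
annual_wage = 26000.0
averaged = [annual_wage / 26] * 26        # the data-match assumption: $1,000 in every fortnight
reported = [2000.0] * 13 + [0.0] * 13     # reality: 13 busy fortnights, then 13 with no work, on benefit
benefit_fortnights = range(13, 26)        # fortnights in which benefit was (correctly) paid

def imputed_unreported_income(income_series):
    # Income attributed to fortnights in which the person reported nothing --
    # the raw material of an apparent overpayment.
    return sum(income_series[i] for i in benefit_fortnights)

print(imputed_unreported_income(averaged))   # 13000.0 -> averaging manufactures "unreported" income
print(imputed_unreported_income(reported))   # 0.0     -> the actual record shows none
```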

But these problems are not necessarily the fault of the IT, which is only doing what it has been designed to do. More checking by humans would probably reduce errors, but outcomes that result from the design of the policy can’t be resolved by technical fixes.

Third, IT systems are not by themselves the solution either. It is possible that the earlier problems with overpayments of family tax benefits may recur very soon. In early February, the federal government introduced a new omnibus savings bill to parliament, combining and revising several previously blocked welfare measures into a single piece of legislation in order to save nearly $4 billion over the next four years, after allowing for increased spending on childcare and family tax benefits. By far the most significant of the projected savings in the bill – $4.7 billion over four years – results from phasing out the end-of-year supplements for family tax benefit recipients, which were introduced to solve the overpayment and debt problems referred to earlier.

So why would the government think that the overpayment of family payments and the subsequent debt problem will be resolved, as this saving seems to assume? The answer is not entirely clear, but seems to relate to the update of Centrelink’s computer system announced in 2015. “The new technology to underpin the welfare system will offer better data analytics, real-time data sharing between agencies, and faster, cheaper implementation of policy changes,” Marise Payne, then human services minister, said at the time. “This means customers who fail to update their details with us will be less likely to have to repay large debts, and those who wilfully act to defraud taxpayers will be caught much more quickly.”

Complementing the Centrelink update are proposed changes in reporting systems at the Australian Tax Office, particularly the introduction of a single-touch payroll system. Under the new system, when employers pay their staff, the employees’ salary or wages and PAYG withholding amounts will automatically be reported to the Tax Office, which can then share this data with Centrelink.
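Purely for illustration, the sketch below imagines the kind of per-pay-event record that single-touch reporting might generate and share between agencies; the field names and structure are assumptions, not the Tax Office's actual specification.

```python
# Illustrative sketch only: a per-pay-event payroll record of the kind that
# single-touch reporting would send to the Tax Office and on to Centrelink.
# Field names and values are invented for demonstration.
from dataclasses import dataclass
from datetime import date

@dataclass
class PayEvent:
    employer_abn: str
    employee_tfn: str
    pay_date: date
    gross_pay: float
    payg_withheld: float

def report_pay_event(event: PayEvent) -> dict:
    """Serialise one pay event for reporting and inter-agency sharing."""
    return {
        "abn": event.employer_abn,
        "tfn": event.employee_tfn,
        "pay_date": event.pay_date.isoformat(),
        "gross": event.gross_pay,
        "withheld": event.payg_withheld,
    }

print(report_pay_event(PayEvent("11111111111", "123456789", date(2017, 3, 2), 1500.0, 210.0)))
```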

The government seems to be assuming that computer and system updates will provide a technological fix to the problem of family tax benefit overpayments – and thus deliver a saving of $4.7 billion over the next four years. But what if the new IT systems don’t work in the ways envisaged? The Australian Tax Office’s computer system has crashed a number of times over the past year. Indeed, in the very same week that the government introduced the new omnibus savings bill, newspaper reports of this “tech wreck” suggested that the Tax Office might not be able to guarantee this year’s lodgement of returns in time for the start of the new financial year. The reports also noted that the development of the single-touch payroll system would remain one of the Tax Office’s priorities for this year.

Finally, to reiterate our first point, these problems have arisen from policy choices and design. Britain is introducing a new system, Universal Credit, which will use real-time adjustments to track changes in earnings and seek to match awards to income on a monthly basis. How well this will work in practice remains to be seen. In both countries, trends towards more insecure and variable employment patterns – and hence irregular pay packets – will make balancing accuracy and timeliness in means-tested welfare benefits more difficult. The assumption of regular and unchanging income no longer holds, and this new reality requires a policy, not a technical, solution.

This piece originally appeared on INSIDE STORY.

 

Labour’s weakness leaves the Tories free to do as they please

📥  political parties, Political sociology, voting

This article first appeared in the Financial Times.

Soul-searching about the electoral prospects of the Labour party has been a British political pastime for decades. After Labour’s defeat at the 1959 general election, Anthony Crosland, the party’s pre-eminent revisionist intellectual, published a Fabian pamphlet entitled “Can Labour Win?” His argument was that economic growth had shrunk the industrial working class and swelled the ranks of an affluent middle class, transforming the electoral battleground on which Labour had to fight.


Pamphlets and polemics have been published with variations on that theme ever since, always after Labour has lost elections. With the exception of a bout of civil war in the early 1980s, Labour has responded to each defeat by seeking to broaden its appeal and modernise its policies. In each era, it has succeeded in getting re-elected.

The results of Thursday’s by-elections paint a bleaker picture, however. It is not simply that Labour’s current leader, Jeremy Corbyn, is unpopular, or that his brand of reheated Bennism holds little appeal for most voters – the chances of his leading Labour into the next general election must now be considered minimal. It is that in the heyday of postwar social democracy, Labour won handsomely, whatever the national result, in seats like Copeland (which it lost on Thursday) and Stoke-on-Trent Central (which it held with a reduced majority).

Since then, three things have happened in these constituencies and others like them: turnout has fallen dramatically, the number of parties contesting the seats has multiplied and the Labour majority has been slashed. The party’s grip on power in its historic strongholds is now more tenuous than at any time since the 1930s, when it was split and faced a popular National government.

Until relatively recently, Labour could rely on its working-class supporters, even as the industrial society that shaped their allegiances steadily disappeared. Today, age and social class inequalities in voting patterns work decisively against the party. Older, middle-class voters turn out in much greater numbers than working-class and younger voters, which disproportionately benefits the Conservatives. Theresa May has been adept at consolidating this older voting bloc behind her government.

The prime minister has used the Brexit vote to offer a new configuration of Conservative politics that is both Eurosceptic and post-Thatcherite, detaching the interventionist, One Nation economic and social traditions of the party (at least in rhetoric, if not yet in practice) from its enfeebled pro-European wing. It is an electorally potent combination, which has had the effect, not just of boxing Labour into liberal, metropolitan Britain, but of holding down the UK Independence party’s vote.

Breathless post-Brexit talk of Ukip eating away the core Labour vote in the north of England has now given way to a more sophisticated appreciation of the flows of voters between the parties — flows from which the Conservatives, and to a lesser degree the Liberal Democrats, appear to be the winners.

Britain’s new electoral geography has also undermined Labour. Once, the party could bring battalions of MPs to Westminster from Scotland, Wales and northern England, where it was indisputably dominant. Now it fights on different fronts against multiple parties across the UK, a national party in a fracturing union. In Scotland, its support has been cannibalised by the Scottish National party, while the Conservatives have picked up the unionist vote there.

In Wales, party allegiances have split in different directions, while in England, the collapse of the Liberal Democrats at the last general election handed a swath of seats to the Conservatives. The EU referendum added another layer of complexity, splitting coastal, rural and post-industrial areas from cities and university towns, and leaving Labour facing in different directions, trying to hold together a coalition of voters with divergent views.

Any Labour leader would struggle in these circumstances — renewing the party’s fortunes at a time of national division is a monumental task. But it is now clear that the surge of support for Mr Corbyn in 2015 was less a new social movement giving energy and purpose to the Labour party, than a planetary nebula collecting around a dying star.

Labour’s weaknesses leave pro-Europeans bereft of political leadership at a critical time. In the absence of an effective opposition that can marshal blocking votes in parliament, the government is able to conduct the politics of Brexit internally. Countervailing forces are restricted to alternative centres of power, such as Scotland or London, and civil society campaigns that are only just starting to form. Big business is curiously mute and the trade unions have other priorities. On the most important question facing Britain, political power is dangerously lopsided.

Yet there are still grounds for optimism on the left, however small. Britain’s radical political traditions — liberal, as well as social democratic — are resilient and resourceful ones, particularly when they combine forces. The defeats inflicted on progressive parties in recent elections around the world have been narrow, not decisive, suggesting that talk of a nationalist turn in the tide of history is overblown. While British Conservatism may be remarkably adaptive, Brexit will be a severe test of it.

Five years after Crosland posed the question of whether Labour could win, Harold Wilson became prime minister in a blaze of the “white heat” of technology. It will not be Mr Corbyn, and it will take a lot longer this time, but Wilson may yet have a successor who can do the same.

 

Shifting the public conversation on mental health – understanding the social conditions that shape private troubles

📥  health, policymaking

Professor Simone Fullagar is Professor of Sport and Physical Cultural Studies in the University of Bath's Department for Health

Mental health professionals, NGOs and a variety of service-user groups have all called for greater funding for local and global mental health services, as well as for greater parity of esteem between these services and broader health policy and service provision in the UK. The Mental Health Taskforce’s 2016 report details the need to address chronic under-spending on mental health services in the UK as demand continues to increase and inequalities widen. NHS spending is increasing in areas that support a medicalised response to mental health issues, with prescriptions for antidepressant medication doubling over the last decade in the UK. The taskforce’s report recommends a billion-pound investment in 2020/21 and calls for fresh thinking to shift cultural attitudes that stigmatise mental ill health as an individualised problem. Recently Theresa May announced a review of child and adolescent services in England and Wales and investment in mental health first aid training for schools. This is an important step, but how far will it go, given that from 2010 to 2015 there was a reduction of 5.4% in the funding of child and adolescent mental health services in the UK?


Young people are a major focus of concern, as they suffer from high rates of depression, anxiety and eating disorders, and are vulnerable to developing more severe and enduring conditions. National survey data indicates a worsening picture for young women aged 15-18, who have the highest rates of depression and anxiety in the UK. Suicide rates have increased, with young men experiencing higher rates of suicide than young women, who in turn have higher rates of hospital admission for self-harm. One in four (26%) women aged 16 to 24 identify as having anxiety, depression, panic disorder, phobia or obsessive compulsive disorder.

The case for greater funding for mental health services is supported by a growing body of evidence which points to the value of investing in appropriate support and early intervention. Recent psychological research in the UK found that different therapeutic approaches to adolescent depression – cognitive behavioural therapy (CBT) and psychosocial interventions – have similar beneficial effects. Across different approaches there is a common thread emphasising the importance of developing a ‘therapeutic alliance’ with a young person so that they are able to engage effectively with support (feeling heard and respected, avoiding further stigmatisation, being involved in coproducing services, and so on). The question of what works best for young people with a range of needs and diverse social backgrounds is an important one, given the role of the Improving Access to Psychological Therapies programme in increasing access to psychological therapy via CBT as a technical formula. Research has identified that 40–60% of young people who start psychological treatment drop out against advice, and a high proportion of people do not seek help from professionals at all despite the recurrence of common mental health issues. All these factors point to the complexities surrounding clinical and community-based mental health provision. A positive shift in recent years has been an increasing recognition of the importance of involving people with lived experience in the coproduction of localised services that move beyond privileging biomedical treatments and support a recovery-oriented approach (for example, the Wellbeing College for adults that has been created in Bath).

While this focus on funding more personalised support is incredibly important for people experiencing all kinds of distress, we also need broader public conversations and policy approaches that offer a critical understanding of how private troubles connect with our public lives to acknowledge the social determinants of mental health. Mental health problems are associated with social injustice, marginalisation and the embodied distress of trauma – poverty, discrimination (class, gender, sexuality, ethnicity etc), poor housing, unemployment, social isolation, gender-based violence, childhood abuse and intensified bullying in the digital age. In the context of austerity measures and cuts to public funding across a range of areas, it is perhaps not surprising that private troubles and social suffering are exacerbated.

Mental health and illness are also highly contested concepts, with diverse and often competing trajectories of thought about the biopsychosocial causes and conceptualisations of distress. Public knowledge of ‘mental illness’ is historically shaped by our diagnostic cultures of psy-expertise (from the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) to digital self-assessments), by the rise of brain science and research funded by Big Pharma, and by the less-often-heard accounts of those with lived experience (including a diverse range of identities – service users, consumers, and members of anti-psychiatry, hearing voices and mad pride movements). While there is often great media interest in studies claiming to identify the biological cause of problems in the brain (often visualised via high-tech images), many people would be surprised to know that there are no specific biomarkers for ‘mental illness’ – and theories about why antidepressant medication works for some people (with effects similar to those of placebo and other non-pharmacological treatments) are based on hypothesis rather than fact.

If we look at the national data cited earlier, we can see that gender figures as an important variable – yet there is a curious absence of gender analysis in mental health policy and service provision, despite the growing research in this area. My own sociological research into women’s experiences of depression and recovery identified the often highly problematic effects of the antidepressant medication prescribed to help them recover. Women spoke of how their embodied distress was heightened by side-effects, and how feelings of emotional numbness exacerbated their sense of ‘failing’ to recover despite following expert biomedical advice. Suicidal thoughts and attempts were evident, alongside guilt about not living up to the normative ‘good woman’ ideals of self-sacrificing mother, productive worker or caring wife. Others described feeling paradoxically trapped in a sense of dependency on a drug that helped them to feel more ‘normal’ and thus able to manage the gendered inequalities and pressures of their lives, with demanding caring roles, work or unemployment. Restrictive gender norms; experiences of inequality that intersect with class, ethnicity, religion, sexuality and age; and a lack of gender-sensitive provision within mental health services and beyond (childcare, housing, domestic violence support, access to low-cost community activities that support wellbeing) were key policy-related issues.

The policy challenge ahead of us is to understand the complexity of how mental health is affected by, and affects, all aspects of social life. Social science research has a unique contribution to make in rendering critical issues (such as gender inequalities) visible in the development of a whole range of approaches, in decision-making processes about resources, and in public dialogue about how we understand the social conditions that shape distress and support wellbeing in the contemporary era.