Preventing violent extremism: the problem of meaningful evaluation 

Posted in: Evidence and policymaking, Security and defence

Giorgia Iacopini is Senior Researcher and Consultant at the Tavistock Institute of Human Relations in London, and is studying on the IPR's Professional Doctorate in Policy Research and Practice (DPRP). This blog is based on a conference paper that was accepted for the BISA Critical Studies on Terrorism working group's 2017 annual conference.

 

Preventing Violent Extremism (Prevent) is part of the UK’s counter-terrorism strategy (CONTEST), established in 2006 in response to the London bombings of 2005. As the name suggests, Prevent aims to stop people from being drawn into terrorism and therefore operates in what is commonly referred to as the ‘pre-criminal’ space. Central to the policy’s theoretical foundation is the notion of ‘radicalisation’, defined as the process through which individuals are potentially drawn into violence[1].

Prevent can be seen as part of a much wider effort, in Europe and globally, to address what has become an important global security concern. Indeed, over the past decade there has been an exponential growth in ‘countering violent extremism’ (CVE) policies, programmes and initiatives which, like Prevent, are specifically aimed at deterring individuals from radicalising to violence.

But is all of this working?

Despite the decade-long investment, answering this question is still not straightforward. Indeed, policymakers, practitioners and researchers across Europe (and worldwide) have increasingly called for identifying ‘what works’ in interventions that seek to counter violent extremism. This has resulted in a growing volume of reviews and debates on the ‘state of the art’ of CVE evaluation, as well as efforts to pull together examples of possible metrics for identifying ‘success’ and guidance on the evaluation approaches available[2].

On the whole, however, one of the main criticisms made against policies and programmes in this field is that they lack empirical data on their effectiveness. One could argue that considerable evaluation challenges have been a constant feature of this area of work. Why is this the case?

First, evaluating preventive efforts against radicalisation or violent extremism runs into a methodological challenge frequently encountered in evaluations required to establish ‘what works’: the difficulty of identifying causal linkages in a field with many intervening variables (complexity). This is the so-called ‘attribution dilemma’ (i.e. being able to prove that the intervention alone was responsible for an observed change). In turn, demonstrating impacts in this field ideally requires studies that track changes (in attitudes or behaviours, for example) associated with the projects over a significant period of time. In the UK at least, this is hindered by commissioning cycles, which mean that local projects are not funded long enough for outcomes to emerge – a common obstacle for many third-sector organisations[3]. In the case of Prevent, the problem is compounded by yearly (rather than multi-year) funding streams, creating pressure to demonstrate results in a very short period of time. The implication is that it becomes almost impossible to explore potential changes that may take many years to materialise.

A second set of challenges stems from the fact that Prevent work is situated in a policy and practice field that is continually evolving (evolution). Policy directions and innovations are constantly refined and updated, and theory and practice around the causes of violent extremism – and around what can be done to promote awareness-raising and/or behaviour change – continue to evolve.

Finally, there is the challenge related to the variety of local delivery (context-specificity). To put it bluntly, local context matters a great deal for success. To have any chance of succeeding, providers have to design interventions that fit local need and are locally relevant, which requires consideration of the cultural and social characteristics of the area. The implication here is that Prevent delivery requires creative solutions; identifying ‘one-size-fits-all’ approaches or developing common measurements of success that bypass local specificity is potentially inappropriate.

To sum up, therefore, the core evaluation challenges relate to: the complexity of Prevent, which makes attribution difficult – perhaps impossible – to establish; the evolution that characterises the policy and practice field; and the context-specific nature of its implementation. By extension, evaluating Prevent would greatly benefit from evaluation approaches that can: overcome the problem of attribution; capture the complex and sophisticated nature of Prevent implementation; and take into account local variation.

In Prevent, however, the evaluation ‘trend’ has gone in a different direction, albeit in an effort to strengthen – rather than hinder – evaluation[4]. The conceptual, organisational and operational shifts that the policy underwent in 2009, 2010 and 2015[5] can perhaps explain the challenges of its evaluation.

The first set of shifts saw Prevent move (and widen in scope) from targeting violent extremism to also targeting non-violent extremism, raising the importance of ‘ideology’ as a key ‘root cause’ that can draw a person into violence. This also included a focus on identifying ‘individuals at risk’, partly intended to overcome concerns that local Prevent delivery had not been focused or ‘hard edged’ enough (and hence was not addressing the challenge). As a result, the second shift related to the policy’s ownership: Prevent went from being a joint endeavour between the Department for Communities and Local Government (DCLG) and the Home Office to being the responsibility of the Home Office’s Office for Security and Counter-Terrorism (OSCT). This resulted in increased centralisation (with the OSCT in charge of policy direction and funding) and reduced funding for projects that were not deemed in line with Prevent’s aims. The third, and most recent, change was the establishment of the Prevent duty in 2015, which for the first time required public sector institutions (local authorities, schools, colleges, universities, health services, prisons and probation) to formally assess the risk of radicalisation.

These changes guided Prevent’s monitoring and evaluation approach. First, the rise of ‘indicators of risk’ as a way of assessing success translated into a preference for evaluation approaches that were positivistic in nature – i.e. (quasi-)experimental designs. This is very much in line with the rise of ‘evidence-based’ policymaking, which is rooted in a largely quantitative/positivistic paradigm, as illustrated by the notion of the ‘hierarchy of evidence’, which places experimental designs at the top[6]. The idea is that this evidence-based knowledge can provide clear and confident answers about ‘what works’, offering a blueprint to assist politicians and policymakers in deciding which interventions to fund, thus avoiding wasting public resources on those that do not work. Second, the move towards greater centralisation, and the consequent reduction in local autonomy to design interventions, was perhaps an attempt to ‘solve’ the problem of local variation in project delivery. Third, the Prevent duty was a way of collecting consistent data on the “number of individuals within intervention programmes or total expenditure on Prevent in a sector” (input) and “the number of individuals no longer assessed as being vulnerable or a reduction of risk” (impact)[7].

From an evaluation point of view, this is understandable, and perhaps even seductive. Indeed, reducing local variation would enable more straightforward comparisons between interventions. This would facilitate the task of understanding what appears to work best, generating a pool of knowledge able to inform decisions about which interventions to replicate.

But is this really the case?

Arguably not. Reducing the complexity of the policy, the nature of the challenge, or the opportunity for local variation also reduces the opportunity to evaluate meaningfully. Prevent displays the characteristics of what are now commonly defined as complex interventions. These are: nonlinearity and unpredictability of change processes (i.e. the factors that lead people to violence are extremely complex and varied, and it is difficult to isolate how a single programme affects the many factors that contribute to a person’s involvement in violent extremism); local adaptation of projects, which speaks to the importance of context; multi-actor delivery; and multiple interventions being delivered at the same time, which influence each other and make ‘attribution’ difficult to ascertain[8]. Individual programmes can therefore only be evaluated meaningfully against indicators that are context-specific. This runs counter to the standardisation drive that has characterised Prevent since 2011, and even more so since 2015.

Perhaps the more significant question we need to ask ourselves is the extent to which the demand for a certain kind of evaluation would be seen as valid by those whose task it is to translate the policy into practice. The more prescribed approach to Prevent following the more recent changes in the policy, and the consequent reduction of local autonomy, has not removed the need, locally, to ‘fit’ projects to the community. Standardising approaches to delivery and to data collection tools may therefore not provide an adequate understanding of project implementation.

This speaks to one of the biggest challenges in evaluation: the need to bring together ‘scientific’ and ‘stakeholder’ credibility[9]. The former relates to generating evidence that will stand up to scrutiny. The latter refers to the extent to which those affected by the evaluation have a say in its design, or see its relevance. Without stakeholder credibility, evaluation findings cannot be put to use. The difficulty stems from the fact that these two aspects may pull in opposite directions: scientific rigour (i.e. getting as close as possible to the ‘gold standard’ of experimental designs) might be impractical to achieve, or may not be seen as useful by stakeholders because its ‘reductionist’ philosophy clashes with their (often more complex) views of social and community problems and how to solve them.

Seeking to establish causality between an intervention and observed results is of course a crucial part of evaluation, and (quasi-)experimental approaches are undoubtedly important for this endeavour. However, they tend to rest on linear notions of how change happens (A causes B). While this works well where there are known solutions to problems and stable contexts, it is problematic for interventions that do not have these attributes. Should we therefore be making Prevent evaluation explicitly complexity-consistent? This would mean designing an evaluation that supports ongoing learning and reflection; surfaces the diverse assumptions about how and why change occurs; and explores initiatives’ progress and impact through evaluation findings that are grounded in local contexts[10].

 

 

Footnotes

[1] HM Government, 2011. Prevent strategy 2011.

[2] For example:

  • Feddes, A.R. and Gallucci, M., 2015. A Literature Review on Methodology used in Evaluating Effects of Preventive and De-radicalisation Interventions. Journal for Deradicalization, (5), pp.1–27.
  • Fink, N.C., Romaniuk, P., and Barakat, R. Evaluating Countering Violent Extremism Programming: Practice and Progress.
  • Lindekilde, L., 2012. Value for Money? Problems of Impact Assessment of Counter-Radicalisation Policies on End Target Groups: The Case of Denmark. European Journal on Criminal Policy and Research, 18(4), pp.385–402.
  • Romaniuk, P., 2015. Does CVE Work? Lessons Learned From the Global Effort to Counter Violent Extremism.

[3] Harlock, J., 2013. Impact measurement practice in the UK third sector: a review of emerging evidence.

[4] Mastroe, C., 2016. Evaluating CVE: Understanding the Recent Changes to the United Kingdom’s Implementation of Prevent. Perspectives on Terrorism, 10(2).

[5] Thomas, P., 2014. Divorced but still co-habiting? Britain’s Prevent/community cohesion policy tension. British Politics, 9(4), pp.472–493; O’Toole, T., Jones, S., and DeHanas, D.N., 2011. The New Prevent: Will It Work? Can It Work? Muslim Participation in Contemporary Governance Working Paper No. 2.

[6] Cairney, P., 2016. The Politics of Evidence-Based Policy Making. Springer Nature.

[7] HM Government, 2015. Prevent duty guidance.

[8] Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., and Befani, B., 2012. Broadening the range of designs and methods for impact evaluations: Report of a study commissioned by the Department for International Development.

[9] Chen, H.T., 2015. Practical program evaluation: theory-driven evaluation and the integrated evaluation perspective. Second edition. Thousand Oaks, Calif.: SAGE.

[10] Preskill, H. and Gopal, S., 2014. Evaluating Complexity: Propositions for Improving Practice. FSG.

 
