If you set out to change something, you’ll be asked to measure the change you’ve made. That’s been our experience over the nine years since we began our work to create a positive culture of public engagement with research. Our approach has been dynamic since 2012, and here I want to share some of the ways we’ve evaluated culture change.

Culture change at the University of Bath

The Public Engagement Unit was formed as part of Engaged360@Bath (2012-2015), a culture change project funded through the RCUK (now UKRI) Public Engagement with Research Catalysts programme. When I talk about our work with colleagues, the first question I’m asked is almost always: “how do you measure culture change?”. I’m not sure I have a pithy answer, apart from saying that it’s hard! This post is a reflection on how we’ve evaluated culture change at the University of Bath and some of the approaches that have worked for us, but I’d love to hear others’ views and perspectives on their experiences of evidencing culture change in their organisations. Do feel free to drop me a line and let's compare notes.

Our approach to evaluating culture change

Our evaluation approach has evolved over time, in response to the changing environment and drivers around us and the stage we'd reached on our culture change journey. Below are the three key drivers that have shaped our approach to evaluation.

Driver one: project reporting

Our Engaged360@Bath project came with reporting requirements which included an evaluation of what we had achieved over three years. We used three tools to help frame our work.

  • A logic model. At the very beginning of the project we created a model that set out our planned interventions, outputs, outcomes and what this new culture could look like in the long term (10 years in the future). It helped us develop a shared understanding of what we could do, why, and what the outcomes might look like.
  • The NCCPE’s EDGE tool. Using this framework helped to provide a baseline assessment of public engagement at the University that informed the types of interventions we could deploy.
  • Complexity theory model of change. Because of the nature of a university - a large organisation where staff have a high degree of autonomy - we adopted complexity theory as our underpinning model of change. This model relies on creating the conditions for change by initiating a variety of interventions rather than directing the change itself. It also gave us boundaries on what we could effectively measure, i.e. the interventions we owned, which helped us focus our evaluation approach.

So, we had three tools that helped us understand what we could evaluate: the logic model, the complexity model, and the EDGE tool. The EDGE tool informed where we needed to put interventions in place, and the logic model helped us define those interventions and the expected outputs we could report on. The complexity framing meant we needed to keep our eyes open to emergent evidence of change, but that it was best to focus on the interventions we had control over.

Reflective practice

The key to our overall approach in this initial phase of our work was adopting a mindset of reflection. It helped us in several ways. It was essential to the successful use of a complexity model, as we had to keep an eye out for emergent learning arising from our interventions, and it meant that we never had to shoe-horn evaluation into our work afterwards. We reflected on our meetings, stayed alive to indicators such as the language colleagues used to describe engagement, and shared our thinking with colleagues via blogs. It was simply how we did our work. It also provided an ongoing body of evidence that we could use to inform decisions such as whether to start, continue, tweak, or stop an activity, and it created a bank of resources informed by our colleagues’ experiences in the University of Bath context, rather than our own practitioner experiences from other settings. One example is our Top Tips guide, which we synthesised from the end of project reports that colleagues submitted after receiving funding from us for pilot work.

End of project report

We produced an end of project report, which included a full-scale evaluation. The evaluation reported against the objectives and project plan that had been proposed at the beginning. It wasn’t really an evaluation of culture change, because you can’t effect long-term change within three years. We could only provide evidence that we’d put things in place (created the conditions for change) and that we had started the journey.

Driver two: making the case for funding

As well as producing our evaluation and end of project reports for funders, we were also making the case, as a Unit, for continued financial support from the University. We ended up having to do this twice: first for a three-year extension to sustain the work we’d started in the Engaged360@Bath project, and a second time for core funding in 2018. These cases were made using the ongoing monitoring and evaluation data we were collecting about our work, such as the number of researchers supported with grant writing or attending training, and placing it in the broader institutional and sector context.

We also commissioned an external evaluator to talk with all the people we’d directly engaged with over the years. This helped us understand and articulate our value, whilst also showing that our approach was working. It also added credibility to our claims.

Driver three: routine monitoring and evaluation

Now, nine years after we began, our programme is reasonably settled; it is a programme that emerged from the initial interventions of our culture change work. Using complexity theory as our model meant that, while we couldn’t exactly predict in the early phases what a culture of public engagement with research would look like, we can point at it now and say 'this is what it looks like'. It also reassured us that the things that have stayed in place are robust, and that the things that have dropped by the wayside weren’t the right thing at the right time. We use a tool called Outnav to store the routine monitoring data we collect on our work, as well as evaluation reports from our initiatives, and it helps us produce reports. Outnav is based on a logic model approach, so it works well for us.

So, how can we be confident about our culture of public engagement?

I mentioned the EDGE tool earlier. This is a self-assessment tool and we have progressed nicely along it. An external measure like the EDGE tool is useful because it adds weight to our claims. However, there are other indicators: the things we couldn’t predict. Faculties and Departments creating new roles (or partial roles) with responsibility for public engagement. Colleagues across all job families being able to confidently talk about and share learning on public engagement in our absence. Colleagues incorporating meaningful public engagement into research proposals and programmes.

Where are we now?

Culture change never ends. Yes, at the beginning there is rapid, dramatic change, but a university is a dynamic organisation operating in a dynamic system, so we have to be constantly on the move and responsive to organisational need. We can only do that by being perpetually vigilant, using evaluation and evidence to argue the case for change (or not), and continually aligning with strategic goals and influencing strategy development to include our agenda.

Being nine years on from our starting point, and recognising that change can take 10 years or so, we decided to reflect on our early approaches and produced a guide about culture change at the University of Bath.

This habit of writing about our learning and keeping an active blog has been a key part of our reflective practice, and it has also raised our profile with our peers.

Issues we’re thinking about at the moment

There are two issues we're thinking about at the moment:

  • EDGE – this monitoring tool implied that once you had moved from one end of the scale to the other, it was done – culture changed! Job done! That's not the case: the next and current phase is about how to sustain that culture and how to evidence it.
  • How does our logic model reflect where we are now?

Top tips

If you're tackling evaluating culture change, here are some things that worked for us:

  • Start your planning with a logic model
  • Set up initiatives and embed reflective practice in everything, including governance structures such as a steering group (see the recommendations in our end of project report)
  • Use reflective practice to ensure that you’re aligning with university strategy
  • Know your boundaries and be confident of what high-quality public engagement with research is and isn’t in your context
  • Watch out for mission creep - the nature of public engagement with research is quite ambiguous and you could find yourself heading off on a tangent
  • Keep track of stories as well as numbers
  • Do collective evaluation with colleagues for the more sensitive or contentious issues
  • Commission external evaluation for initiatives you don’t have the capacity to evaluate yourself, or where you feel a third party would get better data (as we did with our small Engage Grants)

Helen Featherstone is Head of Public Engagement at the University of Bath.

Posted in: Leading Public Engagement, Thinkpiece
