How AI could help citizens’ assemblies make well-informed decisions

Posted in: AI, Data, politics and policy, Democracy and voter preference

Anna Mowbray is a former MSc Public Policy Student with the IPR. She works as Policy Manager at the Charities Aid Foundation and is blogging in a personal capacity. 


Since AI large language models (LLMs) burst into the public consciousness in late 2022, with the launch of an early version of ChatGPT, there has been sustained interest in how these tools can assist human beings with a variety of tasks. One area that has received little attention to date is how such tools might be used by citizens’ assemblies to support deliberative democracy.

LLMs might be used in a deliberative democratic setting in several ways: as an alternative to an expert (with participants asking questions of a model), in a facilitative role, or as a source of information, producing briefing materials for participants.

LLMs show substantial promise in this field, but any implementation must acknowledge the risks inherent in these tools and take steps to counter them. This may mean using them in partnership with human experts, and recognising the strengths and limitations of the tool.

My recent research has begun to test this by considering LLMs as a source of briefing materials for a citizens’ assembly, conducting “interviews” with one LLM, ChatGPT, and evaluating the quality of its output.


LLMs might help or harm deliberation


LLMs are particularly strong when it comes to summarisation, drawing attention to relevant themes when given a broad topic, and presenting a high-level, balanced argument. In my experiments, ChatGPT created helpful briefings on assisted dying, summarising a range of ethical and legal considerations that a citizens’ assembly deliberating on the topic would need to weigh. An LLM might also help organisers structure a citizens’ assembly and determine the topics to be discussed at each session. Beyond this, LLMs have a clear benefit in producing content in plain, accessible language, opening deliberative democracy up to more of the public.

However, practitioners should be alert to specific weaknesses, which apply at least to the current generation of LLMs. When engaging in deliberation, participants typically rely on personal storytelling and sharing their own experiences. With this in mind, it is a particular weakness of LLMs that they do not offer case studies, specific examples, or personal testimonies when asked to produce briefings. There is a risk that the provision of cold, hard facts, unaccompanied by personal engagement, could restrict participants’ ability to engage with one another.

Furthermore, there is clear evidence that LLMs can be used to produce biased material. For instance, when asked to create a conspiratorial narrative about 15-minute cities, ChatGPT constructed an original yet plausible account. ChatGPT’s “helpfulness” contributes to this bias, because it has a tendency to agree with the stance of the prompt it is given. However, it is more difficult to produce conspiratorial information or outright misinformation, as safeguards have been put in place.

My research also found that the sources generated by ChatGPT were often false, referring to papers, articles or websites that did not exist. For example, when I asked ChatGPT to produce a list of suggested sources to help a citizens’ assembly understand how their local area could reach net zero, it cited: “UK Government's Net Zero Strategy: The official strategy outlining the UK's approach to achieving net zero emissions by 2050.” Unfortunately, there is no such document.


What about truth?


The underlying models that power tools like ChatGPT weight the content of the prompt they are given to predict the most likely response. This makes them great at creating realistic-sounding narratives. Yet ChatGPT is a stochastic parrot – though it produces plausible content, the tool does not know whether what it produces is true. If “true” begins to mean “probable based on existing data”, will this fundamentally change what it means for citizens to request and engage with information?

Recent work on deliberative democracy has explored deliberative systems, suggesting that citizens debate and form opinions in the wider ecosystem of everyday life. This includes formal deliberative forums, but also the other places where citizens form their opinions: in the media, around the breakfast table, and in conversations within public institutions, social movements, and NGOs. If AI becomes central to the way citizens form their opinions, this could change the dynamics of inclusion, with particular perspectives more likely to be included in the output that citizens ingest. For example, as much of the internet is written in English, groups who speak other languages are less likely to have their unique concerns reflected in the datasets that models are trained on. More broadly, what does getting information from an AI model do to a person’s ability to empathise with their fellow citizens? How does it affect the democratic function of deliberative democracy – the value of including a range of perspectives?

Already there is a need for guidance on deliberative democracy to reflect the possibilities offered by new tools. For example, DemocracyNext’s useful guide to Assembling an Assembly currently features only brief discussion of digital tools. Future editions will surely need to consider how LLMs are used and integrated into deliberative democratic processes.

Much remains to be understood. How might LLMs and other AI tools affect citizen engagement and the functioning of democracy? How will they affect deliberative democratic practice? Practitioners could run experiments testing LLMs as facilitators in deliberative settings, or even as expert witnesses. As this journey begins, it is critical that those using such tools understand their strengths and limitations, and use them alongside human knowledge and expertise.

All articles posted on this blog give the views of the author(s), and not the position of the IPR, nor of the University of Bath. Anna is writing in a personal capacity, and not as an employee of Charities Aid Foundation.
