Demystif-AI: Reducing the digital divide using deliberative engagement

Posted in: Doing Public Engagement, Engage Grants

Discover what happened when Andy Barnes used his Engage Grants funding to engage older adults with artificial intelligence. 

Hi everyone, I’m Dr Andy Barnes, a lecturer in artificial intelligence (AI) in the Department of Computer Science here at the University, and I conduct research into the operational aspects of AI. This work includes the ethical and social consequences of AI, as well as technical topics such as monitoring and maintenance strategies for AI systems.

Demystifying Demystif-AI 

With funding from the Public Engagement Unit, I developed the Demystif-AI project. Through this work, I was interested in engaging a group that is potentially overlooked in activities about AI: older adults over 65. I wanted to create a space for this group to come together to learn, discuss and develop their opinions on the use and applications of AI in the real world. To do this, my colleagues and I decided to work with local University of the Third Age (U3A) groups, which are made up of mainly retired individuals interested in learning new things.

We delivered workshops to this group that provided an overview of how AI works, where participants may encounter (or have already encountered) AI, the direction AI is heading in, and a chance to get hands-on experience with two AI tools (ChatGPT and DALL-E).

Through this work we aimed to give participants a safe space to explore this often overwhelming topic, developing their own thoughts and opinions on AI and, most importantly, discussing and deliberating these views with others.

Engagement approach

Inspired by work from the UK's leading public participation organisation Involve (https://involve.org.uk/resource/deliberative-workshop), we decided to take a deliberative approach to engage this group. This involved facilitating opportunities for developing, discussing and challenging opinions regarding AI. To do this, we employed the three steps highlighted by Involve: 

  1. Information Sharing 
  2. Developing Understanding 
  3. Public Deliberation 

The first step involved traditional information sharing: an overview of AI, how it works and some real-world applications, presented by the team. To address the second, developing understanding, we ran guided practical activities with two AI tools: ChatGPT (a text generator) and DALL-E (a text-to-image generator). Participants worked in groups to complete a series of tasks and then explored the tools as they saw fit. Finally, to address step three, public deliberation, we facilitated a whole-group discussion after each practical task on participants' findings, thoughts and opinions of the tools.
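For readers curious how these two tools can also be driven from code (in the workshop itself participants simply used the web interfaces), here is a minimal Python sketch using the OpenAI API. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model names and prompts are illustrative assumptions, not what participants used.

```python
# Minimal sketch: calling the same two tools (text and image generation) via the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text generation: the kind of guided task participants tried in ChatGPT's web interface.
chat = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": "Explain, in two sentences, how a spam filter learns from examples."}],
)
print(chat.choices[0].message.content)

# Image generation: the kind of prompt participants tried with DALL-E.
image = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="A hand-drawn map of Bath in Roman times",
    n=1,
    size="1024x1024",
)
print(image.data[0].url)  # URL of the generated image
```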

The image below gives an example of some of the images the participants generated using DALL-E. 

Three images generated by the AI programme DALL-E: the first shows an aeroplane made of scrap metal, the second an animated fish in a glass bowl, and the third a hand-drawn map of Bath in Roman times.

What did we learn? 

Our learning during this project was twofold. We first made the following observations regarding the participants' relationship with AI: 

  1. Primary concerns were raised over the lack of AI legislation. 
  2. Learning that AI depends entirely on the data it is fed was a learning curve for the majority of participants. 
  3. Participants were largely unaware of the (often unethical) way data is curated for these online tools. 
  4. There was great interest in the discussion of ethics and the big questions facing AI in the future. 
  5. On a positive note, many participants felt better equipped to identify AI in the future. 
  6. 95% of participants felt more comfortable discussing the topic of AI following the workshop. 
  7. 85% of participants have since developed new opinions of AI in society. 

The second set of lessons learnt revolves around the approach we used: 

  1. Structuring the session around the ‘three steps’ provided by Involve greatly helped us allocate the right amount of time to each part. 
  2. Providing extra time for discussion and questions was greatly appreciated by the participants. 
  3. Using a practical, hands-on experience was vital in enabling participants to develop their own understanding. It gave them the agency to experiment with AI and to put their questions to team members as they arose. 
  4. Providing the space for discussion in small groups enabled those participants who weren’t confident enough to speak in front of the whole group to still have discussions and debate within their small groups.

The benefits of these sessions were not limited to the project or participant aims; the student volunteers also benefited from:

  1. Discussions with people outside the ‘tech sphere’, which opened them up to new views. 
  2. The chance to practise and evolve their communication and interpersonal skills. 

What’s next? 

We now understand which AI topics older adults find concerning and interesting, so we can refine our objectives to target these areas specifically, whether for demystification, deliberation or another activity suggested by the community (please do reach out!). 

Further, having identified a suitable approach to these discussions (deliberative engagement), we now have various opportunities to work with other groups underrepresented in the ever-changing digital world. Specifically, the team are now looking at the digital divide in coastal communities, hoping to understand whether the boom in AI technology is narrowing or widening that divide. So, watch this space; we might have more to say in the future! 

The Demystif-AI project was funded by a Public Engagement Unit Engage Grant 2022/2023. 

Rob Cooper is a Public Engagement Officer in the Public Engagement Unit.
