Generative Artificial Intelligence (AI) and Academic Integrity - Part 2

Posted in: Academic Integrity, Artificial Intelligence, assessment, Digital skills, learning and teaching, learning technology, TEL

UPDATE:

On 7 March, the CLT hosted a webinar on Generative AI as part of our EduTalk series. Colleagues from the Departments of Computer Science and Health and the School of Management presented their initial work and findings using ChatGPT in the context of learning, teaching and assessment. Find out more and watch the recordings on our Teaching Hub.

NEW DEVELOPMENT:

On 14 March, GPT-4 was released, which includes some refinements (such as permitting inputs of up to 25,000 words, and the ability to accept images as input). Currently, it is available either as a paid version via OpenAI (including their APIs) or via the Bing chat trial. At present, only the developer API version accepts image inputs.
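For colleagues who want to try GPT-4 programmatically, a minimal sketch using the openai Python package (as it stood at the time of writing) is shown below. The prompt is purely illustrative, and the library's interface may well change in future releases:

    # Minimal sketch: querying GPT-4 via OpenAI's chat completions API.
    # Assumes the `openai` Python package is installed and that an API key
    # is available in the OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful teaching assistant."},
            {"role": "user", "content": "Suggest three authentic assessment "
                                        "ideas for a first-year unit."},
        ],
    )

    # The generated reply is nested inside the first choice.
    print(response["choices"][0]["message"]["content"])

Running this requires a paid OpenAI account with GPT-4 API access; image inputs are not shown, since (as noted above) that capability is currently limited to the developer API.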

Decorative image representing artificial intelligence, generated by the AI image generator Craiyon.

Can we ban students from using AI?  

In a word: no. 

TL;DR: even if we could ban students' use of generative AI entirely in assessments, this would not enable them to learn how to use such technologies effectively, ethically and transparently in their studies, and it might fail to prepare them for the future world of work.

The longer answer: when a new technology appears – or is thrust into the limelight in a new way – it is quite natural to be sceptical (after all, how many times have we been promised that technology will solve all our problems, only for it to generate new challenges or end up at the back of our technological cupboard, never to be played with again?). What is more, the UK government has banned essay mills and contract cheating; if ChatGPT and tools like it are essentially glorified essay mills, and pose such a threat to academic integrity, should we ban generative AI too?

Whilst tempting, it is precisely the ubiquity of such tools – and the likelihood that they will become even more ubiquitous in the future, both in the workplace and in learning and teaching – that makes banning ChatGPT and generative AI tools a losing battle. Moreover, as noted earlier, banning students from using such tools in assessment might also have unintended consequences. As the National Centre for AI argues:

…on the face of it, this seems [simple], but then again there are problems with the detail.  As I write this in Microsoft Word, I have predictive text enabled, and AI is indeed generating some of the words that I’m writing (it just wrote the word ‘writing’ for me then!). 

I make extensive use of Grammarly and often utilize it to revise sentences, improving their readability and rectifying mistakes. Yes, that last sentence was rewritten for me by Grammarly.  It’s been generated by AI.  This is almost certainly not the kind of use case we actually want to prohibit though. 

A ban may achieve little in the short term, and it could do our students a disservice by preventing them from accessing some of the benefits of these technologies and from learning to use such tools thoughtfully – critically evaluating the pros and cons for themselves and knowing when (and, perhaps more importantly, when not) to use them, and for what purposes. We should also be mindful that AI can help us create more accessible content, so in banning its use we may inadvertently ban something that makes learning and teaching more accessible.

Perhaps, then, our approach should focus on people rather than on the technologies themselves. As JISC has recently advised, we may be best off clarifying acceptable student behaviours, understanding what drives students to cheat, and ensuring good assessment design, rather than specifying prohibited technologies, which are continually evolving and being embedded into other tools.

Further, as a recent article in the Times Higher notes, before we rush into action: ‘Nothing has really changed; academic integrity is difficult to police and always has been. If we want to deter cheating, the best way to do so is to remove the motivation for cheating in the first place’. In that sense, our first line of defence, as we advise on our Teaching Hub, is to double down on efforts to educate students in good academic citizenship, to signpost support and help, and to design assessments that are authentic and embrace assessment for learning (which is already underway with CT). Second, and to quote Michael Draper, professor in legal education at Swansea University, from the Guardian: ‘If we’re preparing students for the outside world of work and if in the workplace this sort of technology is given to us, then I think we need to embrace it rather than ban it.’

This viewpoint is shared by JISC, which provides network and IT services to higher education. To quote its director, Michael Webb, at length:

While assistive computation tools like ChatGPT can undoubtedly be seen as presenting a challenge to the sector, they also have the potential to change it in really positive ways – by cutting staff workloads, for example, or enabling new assessment models.  

The fact that ChatGPT can generate properly structured, grammatically correct pieces means that students could well use it to produce essays. Equally, though, it could be used by educators to help them generate course content, reports and feedback.  

The knee-jerk reaction might be to block these tools in order to stop students cheating, but that’s neither feasible nor advisable. We should really regard them as simply the next step up from spelling or grammar checkers: technology that can make everyone’s life easier.  

Like it or not, AI-powered computation tools for written content, image generation and coding are here to stay. Aspects of them will soon be integrated into apps like Microsoft Office. The key is to understand their shortcomings and weak points as well as their strengths. We should all be aware, for example, that ChatGPT’s output can be poorly argued, out of date and factually inaccurate.  

We don’t need to revert to in-person exams: this is a great opportunity for the sector to explore new assessment techniques that measure learners on critical thinking, problem-solving and reasoning skills rather than essay-writing abilities. Factual knowledge can be assessed during the learning process, while the application of that knowledge could be tested in project work.  

On the last point, as we indicate on the CLT’s Teaching Hub page for Generative AI, despite the hype, the issues that ChatGPT might be seen to raise for assessment practice are not necessarily new to Higher Education. Responding to such challenges aligns with positive opportunities to transform why and how we carry out assessment, including:

  • How we measure learners' critical thinking, problem-solving and reasoning skills rather than essay-writing abilities. 
  • How we shift from recall of knowledge to real world application. 
  • How we develop students who are able to reflect on their learning and performance. 

Finally, as Sam Illingworth, Associate Professor at Edinburgh Napier University, suggests, ChatGPT, and tools like it, provide us with a wealth of opportunities to reconsider our approach:

… one of the major challenges that ChatGPT presents is one I should be considering anyway: how can I make my assessments more authentic – meaningful, useful and relevant. Authentic assessments are designed to measure students’ knowledge and skills in a way that is particularly tailored to their own lives and future careers.  

