Generative Artificial Intelligence (AI) and Academic Integrity - Part 1

Posted in: Academic Integrity, Artificial Intelligence, assessment, learning and teaching, learning technology, TEL

UPDATE:

We are working closely with colleagues across the CLT, Academic Registry and the Skills Centre to develop a joined-up approach to the use of generative AI at the University.

Specifically, we have reviewed our Code of Practice, Academic Misconduct and Academic Integrity Statements. Further information is available on the Teaching Hub.


End of the Essay?

Generative AI Large Language Model (LLM) tools, such as ChatGPT, pose specific challenges to Higher Education. One of the most discussed at the moment is the challenge to assessment and academic integrity. Institutions have responded in different ways, with some attempting to ban the use of AI, and others examining how we might ‘embrace’ it, or at least questioning how to live with it. To summarise several hundred blog posts on this topic, the problem statement put forward is: if AI can generate (either in full or, more realistically at present, in part) work that students then attempt to pass off as their own, or if it can create answers that score highly against our existing marking schemes and criteria, does that mean the essay is dead? Further, should we return students to in-person assessment to prevent the use of AI tools? More existentially, what is knowledge, and how should we assess learning, when this has been co-generated with generative AI?

Responding to this challenge, OpenAI (the company behind ChatGPT) has started work on watermarking content generated by ChatGPT and has indicated this could be built into standard originality detection tools; likewise, a student has built an app to detect AI-generated essays. Turnitin and other originality detection companies are also rolling out tools which claim to detect AI-generated text. In Turnitin’s case, the company claims 97% accuracy but has, to date, provided no evidence to support this claim.

Certainly, we should sound a note of caution around adopting such detection tools as our first line of defence, not least as the National Centre for AI at JISC has stressed:

there is the very real risk that text modified/generated by everyday writing tools [e.g. Word] is likely to be flagged by AI detector tools, if we end up going down that route – almost certainly not what we want. 

Even though running student work through watermark detection software may appear to offer a solution (or at least a deterrent), it is problematic on a number of fronts. First, many of these detection tools are not fully developed, tested or proven to be reliable in the specific context of HE. Second, we should question how such tools are trained, and whether the underlying data model is prone to bias against certain groups or individual characteristics. Third, many such tools do not tell you what they do with any content that you upload: have you asked students for permission to upload their work, and does the tool allow you to delete their work on request, or does it become “swallowed up” in the larger dataset (thus contravening GDPR)? As Philip Dawson, at Deakin University in Melbourne, argues, ‘Our efforts to get students to act ethically should themselves be ethical’.

Clearly, however, there is the potential for ChatGPT to be used by students to ‘cheat’ on essays – so much so that in the last few months articles have appeared focused on the “threat” of ChatGPT to Higher Education. For instance, NYC schools are banning ChatGPT (recently followed by NSW and Queensland in Australia), and stories are emerging that lecturers are being encouraged to review their assessments. As if a global pandemic had not brought enough change, ChatGPT and other AI tools promise to keep us busy thinking about learning, teaching and assessment for some time to come.

In the words of Dr Thomas Lancaster, a leading expert on academic integrity who recently spoke to us at Bath as part of our EduTalks series:   

It’s certainly a major turning point in education where universities have to make big changes. They have to adapt sooner rather than later to make sure that students are assessed fairly, that they all compete on a level playing field and that they still have the skills needed beyond university. (Guardian, 2023).  

As Lancaster goes on to note, the key development with ChatGPT is that existing AI capabilities in other tools are now wrapped up in a nice, human-friendly (and currently free) interface.  

Companies such as Microsoft are keen to explore ChatGPT further: Microsoft is investing billions in the tool and looking at integrating it into its products in the future (including Word and PowerPoint), tools that are widely used by students (and staff). It is also integrating the tool into its own search engine, Bing (although ChatGPT doesn’t seem too happy about that at present and has threatened to “destroy whatever [it] wants”); and Google is releasing its own version, Bard, once it stops making mistakes that cost the company over $120 billion.

In short: the line between AI-generated content and human-generated content, and the tools that produce content, is increasingly blurred (some might say fuzzy, if they were searching for this); and at times students (and staff) may struggle to distinguish where an AI tool has been used. The challenge, therefore, if we want to protect academic standards, is how to ensure our response is robust and measured, and that we do not accidentally prohibit students from using ‘good’ AI tools.

Presumably, we will want students to continue to use the internet to research and collaborate, and do not want to stop them from using Google or other search engines? As a recent article in WONKHE suggests, ‘Higher education as a whole cannot get away from “assuming AI exists” and that students will use it – so there is a need to build curricula and assessments around this reality’. We will also need to focus on developing our collective understanding of AI literacy, to help explain to students why using so-called ‘everyday AI’ in tools like Office365, such as its Design Tool and predictive text, is OK as part of the writing process, but why using ChatGPT integration inside Office (if, when it arrives, it is clearly distinguishable from the ‘everyday AI’) is not.

There is no shying away from the fact that this is a significant challenge, but it also represents an opportunity for the sector as we rethink our approach to assessment and build curricula around this new reality. That said, if ChatGPT gets its way, gains access to the nuclear codes and designs a virus so that humans kill each other (you wonder if the data it has been trained on included The Last Of Us), we may have other problems to concern ourselves with.

 

If you are interested in learning more about ChatGPT, and how staff at the University are already exploring and evaluating it in the context of learning, teaching and assessment, please join us for our workshop on 7 March from 13.15-14.30. 
