ChatGPT came to global attention in December 2022, having been launched in November by OpenAI, a research laboratory based in San Francisco (USA). OpenAI has a general mission to build safe Artificial General Intelligence (AGI), by which it means autonomous systems which 'outperform humans at most economically valuable work— [and] benefits all of humanity.'

The news is ablaze with stories, ranging from ChatGPT heralding an existential threat to academic integrity, to the future world of work (where white-collar workers will be replaced by AI overlords), or even the rise of The Terminator. It has also provided relationship advice and, in one example, was used with another tool to create a virtual AI wife. So what is it: is ChatGPT to be welcomed with open arms, or feared as the enemy at our door – perhaps quite literally, if you have a Ring doorbell?

We set out below a general introduction to the tool (including a basic guide to Artificial Intelligence, and some of ChatGPT's main limitations).

What is ChatGPT?

In the words of ChatGPT, ChatGPT is a large language model developed by OpenAI. It is trained on a massive amount of text data and is designed to generate human-like text. As a large language model (LLM), ChatGPT has a number of capabilities that allow it to understand and respond to human language in a natural and conversational way. These capabilities include:

  • Natural Language Understanding (NLU): ChatGPT can understand the meaning behind human language inputs and extract important information from them.
  • Language Generation: it can generate human-like text in response to given prompts.
  • Text Completion: it can complete text prompts by generating text based on the given context.
  • Language Translation: it can translate text from one language to another.
  • Text Summarization: it can generate a summary of a given text document.
  • Text Classification: it can classify text into predefined categories.
  • Text Generation: it can generate text that is coherent, grammatically correct and semantically similar to the input text or prompt.
  • Dialogue Systems: it can be integrated into conversational systems, like chatbots or virtual assistants, to provide natural and human-like interactions.
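That last capability – plugging a model like ChatGPT into a dialogue system – can be sketched as a simple chat loop. The sketch below is purely illustrative: `respond()` is a placeholder stand-in for a real model call (for example, via OpenAI's API), and the role/content message format mirrors the structure commonly used for chat models, not any specific product's interface.

```python
# A minimal sketch of a dialogue loop. respond() is a placeholder for
# a real LLM call; here it simply echoes the user, so the structure of
# the conversation history can be shown without a network call.

def respond(messages):
    """Placeholder for an LLM call: replies based on the latest user message."""
    last_user = messages[-1]["content"]
    return f"You said: {last_user}"

def chat_turn(history, user_input):
    """Append the user's message, obtain a reply, and record it."""
    history.append({"role": "user", "content": user_input})
    reply = respond(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_turn(history, "Hello!"))
```

In a real chatbot or virtual assistant, the accumulated `history` is what gives the conversation its context: each new turn is answered in light of everything said so far.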

What is Artificial Intelligence?

We have all probably interacted with a chatbot of some kind in the last few years – normally they appear as pop-up assistants on websites, responding to frequently asked questions. Some of you may even remember Microsoft’s Clippy (although fear not, the friendly paperclip is back)! In the context of our own homes, Apple's Siri, Amazon's Alexa and Google's Assistant are all waiting and listening for a wake word in order to help purchase items, play songs, turn on lights or set the kitchen timer.

What these tools have in common is that most are powered by Artificial Intelligence (AI). Not every chatbot is AI-powered – which may go some way to explaining their varying levels of helpfulness as customer service agents – and not everyone feels comfortable welcoming Amazon or Google into their bedroom. Nevertheless, AI is increasingly found all around us, not least in many of the websites and much of the software we use every day. For instance, if you have used the ‘design ideas’ function in PowerPoint, or Microsoft Editor in other Office365 tools, then you have been using AI to create designs, check spelling and grammar, summarise text, and improve the accessibility and readability of text and images. Indeed, such software has been available for several years, and is widely used in a variety of settings, such as robotics, healthcare and, increasingly, Higher Education.

AI itself is a contested term, but essentially it is the simulation of human intelligence processes by machines, especially computer systems. Further, when AI is talked about, what is often being referred to is one component of AI – machine learning. Machine learning is a subset of AI that involves the development of algorithms and statistical models that enable computers to learn from data, without being explicitly programmed. The goal of machine learning is to develop models that can make predictions or take actions based on input data.

Machine learning algorithms can be applied to a wide range of problems such as image recognition, natural language processing, predictive analytics, and decision making.
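To illustrate what 'learning from data, without being explicitly programmed' means in practice, the sketch below fits a straight line to a handful of invented data points using ordinary least squares – one of the simplest machine-learning models. The data set (hours studied versus exam score) and all variable names are made up for the example.

```python
# A toy machine-learning model: fit y = slope*x + intercept to example
# data by ordinary least squares, then use the fitted line to predict
# an output for an input the model has not seen.

def fit_line(xs, ys):
    """Return the slope and intercept minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# 'Training data': hours studied vs. exam score (invented for illustration).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 68]

slope, intercept = fit_line(hours, scores)
prediction = slope * 6 + intercept  # predict the score for 6 hours of study
```

Nobody wrote a rule saying 'more hours means a higher score'; the model inferred that relationship from the data – which, in miniature, is the same idea behind far larger systems such as ChatGPT.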


While not a contender for the next Booker Prize (yet), ChatGPT is already good enough for students to use in their writing process: it can suggest sentences, paragraphs and essay plans, give feedback on drafts, and even produce entire scripts. It can also generate basic code – although it reassuringly informs us that it cannot teach a cat to code in Python. As the following resource makes clear, its uses are seemingly endless, and it can be a fairly good German teacher to boot (it currently claims to be available in 95 languages).

This is not to say that ChatGPT is perfect – indeed it has some key known limitations (and, like humans, it is not always 100% correct, even if it confidently assumes it is most of the time). Furthermore – and this is something the tool currently does not tell you – in order to ensure its results are not harmful or distressing, the underlying model is trained to filter out such content. It has been reported that OpenAI outsourced this work to workers in Kenya – paid $2 an hour – to sift out deeply unsavoury content from the darkest parts of the internet; in at least one instance, a worker was left mentally scarred by what they viewed.

As a large language model, ChatGPT does not understand the world or the connections between the things it says. It may be trained to recall things, but it does not understand them. To quote a recent post, it is essentially a ‘bullshitter’, and not to be trusted. That is not to say it is telling lies, per se, or is trying to deceive you, but simply that it does not know what it is telling you. To be an effective tool in Higher Education, it requires human beings, trained in spotting high-quality ‘bullshit’, to assess its usefulness as a research and writing tool. This is where we, as educators, can step in.

It is worth adding, too, that ChatGPT is only trained – at the time of writing – on data up to 2021 (although some articles have suggested it seems to be reporting accurately on recent events as the model is continually refined). It also cannot produce non-text responses, make predictions about the future, nor browse or summarise information on the (live) internet. It also struggles to link concepts and ideas between topics for which relatively little data exists – in other words, in and of itself, it cannot generate new ideas.

Further information and resources are available, and in development, on our Teaching Hub pages. This page is a work in progress and will be kept up to date to provide information, guidance, and resources to support our understanding and usage of AI tools, such as ChatGPT, in learning, teaching and assessment. It is aligned with both existing Curriculum Transformation principles and the four priority areas set out in the University's assessment and feedback action plan.

