Laura Smyth is a part-time researcher and PhD student working for the Centre for People-led Digitalisation and based at the University of Bath. The Centre for People-led Digitalisation is dedicated to creating needs-driven processes to support industry in realising the potential of a people-led approach to digitalisation. Laura's research examines adult (digital) skills policy designed and implemented within England since 1997. This includes analysis of historic and contemporary skills policies and initiatives available across England, and exploration of the factors that have affected skills policy outcomes across regions. If you are interested in Laura's research, please contact ls2507@bath.ac.uk. Alternatively, if you would like to hear more about the Centre for People-led Digitalisation, please contact p-ld@bath.ac.uk.
ChatGPT, a chatbot developed using OpenAI's advanced deep learning-based language models, has gained significant popularity since its launch in November 2022, not least among academics and students in Higher Education (HE). This surge in adoption is reflective of a broader trend over the past five years, during which AI usage in higher education by both academics and students has been steadily increasing.
Given the potential of AI technologies to "extend human capabilities and possibilities of teaching, learning, and research" when appropriately applied (Popenici and Kerr, 2017, p.3), the adoption of ChatGPT in HE may serve as a tool that can enhance educational practices. Nevertheless, the rise in adoption of AI technologies has also spotlighted concerns about the potential misuse of ChatGPT in HE. The capability of chatbots like ChatGPT to produce essays, data, and assignments has raised significant concerns regarding the threat AI poses to academic integrity. There is therefore an urgent need to understand the challenges and opportunities posed by AI in HE.
The Evolution and Capabilities of AI
The definition of AI has evolved significantly since Alan Turing's description of intelligent reasoning in machines in the 1950s. Today, AI is defined as "computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction, and the use of data for complex processing tasks" (Popenici and Kerr, 2017, p. 2). This definition encompasses ChatGPT, a generative AI language model that employs deep learning to generate new textual content by analysing patterns in existing data.
ChatGPT is also an AI chatbot. Chatbots are interactive digital systems capable of automating conversations by imitating human dialogue. Many chatbots are rule-based, triggering pre-written responses through keywords and language identifiers rather than using AI. This is where rule-based chatbots and AI chatbots such as ChatGPT differ. AI chatbots can execute a wide array of language tasks with little to no task-specific training, enabling them to generate essays, respond to questions, and assist with a variety of other tasks.
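To make the distinction concrete, a rule-based chatbot can be sketched in a few lines of code: it simply scans the user's message for keywords and returns a canned reply, with no learning or language generation involved. (The keywords and responses below are invented for illustration.)

```python
# Minimal sketch of a rule-based chatbot: each keyword triggers a
# pre-written response; anything unmatched gets a fallback message.
RULES = {
    "hours": "The library is open 9am to 9pm on weekdays.",
    "deadline": "Coursework deadlines are listed on your course page.",
    "hello": "Hello! How can I help you today?",
}
FALLBACK = "Sorry, I don't understand. Please contact student services."

def reply(message: str) -> str:
    """Return the first pre-written response whose keyword appears."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK
```

An AI chatbot like ChatGPT, by contrast, generates its replies word by word from a learned statistical model of language, which is why it can handle questions its designers never anticipated.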
Use of ChatGPT in Higher Education
ChatGPT has demonstrated its ability to handle complex tasks, such as mimicking the writing styles of specific authors, or automating labour-intensive data wrangling tasks, like transforming dates, units, or names into different formats within a dataset. Additionally, the ability of ChatGPT to aid in tasks such as designing exams, sourcing teaching materials, and generating lesson plans underscores its potential to streamline various administrative duties in HE. Due to its vast potential in both research and administrative tasks, academic interest in ChatGPT has been high and steadily increasing since its debut. Concurrently, a growing body of literature on ChatGPT is beginning to explore the potential implications, opportunities, and risks of AI usage in higher education, particularly in relation to teaching and learning practices.
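As an illustration of the data-wrangling point, the snippet below is the kind of one-off script a user might ask ChatGPT to draft, here converting UK-style dates to ISO 8601 format. The function name and sample dates are invented for this sketch, not taken from the post.

```python
from datetime import datetime

def to_iso(date_str: str) -> str:
    """Convert a UK-style date, e.g. '25/12/2023', to '2023-12-25'."""
    return datetime.strptime(date_str, "%d/%m/%Y").strftime("%Y-%m-%d")

# Reformat a whole column of dates in one pass.
dates = ["25/12/2023", "01/04/2024"]
print([to_iso(d) for d in dates])  # ['2023-12-25', '2024-04-01']
```

Writing such throwaway scripts by hand is tedious rather than difficult, which is precisely why generating them is a good fit for a chatbot.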
Potential Risks of ChatGPT misuse in Higher Education
However, the ability of ChatGPT to generate coherent text outputs has raised concerns about potential misuse by students. For instance, students could use ChatGPT to write essays, potentially achieving higher grades with less effort and less knowledge. Concerns have been raised that students' use of ChatGPT in HE could lead to several issues, including a lack of communication skill development, limited critical thinking abilities, reduced innovative thinking, and insufficient understanding of course content. Thus, by relying on ChatGPT, HE students may miss the opportunity to develop essential skills.
Additionally, ChatGPT is not infallible and can produce erroneous and biased information. Students who depend on ChatGPT without exercising critical thinking risk accepting and spreading incorrect or biased information. Consequently, widespread reliance on ChatGPT by students could jeopardize the credibility of higher education if it compromises students' academic integrity and learning outcomes.
While the issue of student cheating in HE predates ChatGPT, ChatGPT has added a new layer of complexity for institutions to manage. Early studies have indicated there may be adverse associations between the utilization of ChatGPT and academic integrity, as well as between ChatGPT use and traits such as honesty, humility, and conscientiousness. The decision by numerous universities to ban ChatGPT reflects concerns about the potential risks and negative implications associated with HE students using ChatGPT.
Nevertheless, enforcing these bans may prove difficult, as successfully detecting ChatGPT use is not guaranteed, and innocent students may feel unjustly targeted. One recent survey found that 75% of students who reported using ChatGPT would continue to use it regardless of whether their professors or universities implemented a ban. It is therefore unlikely that the use of restrictive measures in HE will resolve the risk of illicit ChatGPT use by students.
The Positive Potential of ChatGPT
Conversely, some researchers have argued that AI is not necessarily driving an increase in cheating incidents. Rather, evidence suggests that ChatGPT can significantly enhance students' learning experiences by streamlining research and study processes. For example, ChatGPT can aid in brainstorming ideas, proofreading, creating study guides, and clarifying complex concepts. Notably, 47% of students surveyed in autumn 2023 reported a positive effect on their studies due to ChatGPT, contrasting sharply with just 22% of faculty.
The differing perceptions between students and faculty highlight a potential gap in understanding how ChatGPT can be used effectively while safeguarding educational values. Indeed, ignoring the potential of ChatGPT as a learning tool may hinder the development of innovative educational practices, innovations that could offset the decline in service quality that massification has brought to HE.
Introducing ChatGPT into HE is akin to opening Pandora's box, as its effects—both beneficial and detrimental—are likely to emerge in ways that are difficult to predict or manage. However, by proactively addressing these challenges and leveraging its capabilities thoughtfully, HE institutions could maximize the positive impact of this transformative technology.
All articles posted on this blog give the views of the author(s), and not the position of the IPR, nor of the University of Bath.
Responses
The vital considerations here are two-fold:
1) "Given the potential of AI technologies to "extend human capabilities and possibilities of teaching, learning, and research" when appropriately applied (Popenici and Kerr, 2017, p.3)" - WHEN APPROPRIATELY APPLIED.
2) The development of AI technologies has no requirement for ethical or moral considerations, both of which represent a serious threat to society at large.
Thank you for your comment Tim.
Certainly, it cannot be guaranteed that AI will be "appropriately applied". Indeed, there have been, and will continue to be, many examples of misuse. While this is not unusual for new technologies, actions to limit the opportunities for illicit usage should be explored thoroughly, in addition to exploration of AI's potential positive uses.
Additionally, as you highlight, there are no requirements for ethical or moral considerations in AI development. This affects not just the potential outcomes of AI but also its creation: the web scraping used to build AI like ChatGPT without seeking permission has raised ethical and legal concerns.
Hopefully, the increasing accessibility and use of AI, and the immediacy of its impact on individuals and society, will prompt broader discussions of the need to introduce these ethical and moral considerations.
If AI is a topic you are interested in, I would recommend connecting with PhD student Archana Raghavan, who is completing her PhD with the School of Management at the University of Bath.