A psychological perspective on academic integrity in the age of artificial intelligence
Let's talk about something that's probably been on your mind lately: using Generative Artificial Intelligence (GenAI) tools like Copilot, ChatGPT or Claude for your work. If you're feeling confused about where the ethical lines are drawn, you're not alone. There's some fascinating psychology behind why these decisions feel so complicated, and understanding it might help you navigate these murky waters more confidently.
The social learning maze we're all in
Here's the thing: we learn what's "normal" by watching others. Right now, you're probably picking up mixed signals everywhere. Maybe you've heard a friend casually mention using GenAI to "brainstorm ideas" for an assignment, while another mate insists they'd never touch it. Perhaps you've heard a lecturer mention GenAI tools positively in one context, then warn against them in another. Your brain is trying to piece together the social norms from these scattered observations, and frankly, it's a bit of a mess out there.
This is what psychologists call observational learning. We're constantly watching and learning from the behaviour of people around us. But when the environment is sending mixed messages, which it is with GenAI right now, our internal moral compass can get a bit wobbly.
Your brain's sneaky justification tricks
Ever caught yourself thinking something like "Well, it's not really cheating if I'm just using it to check my grammar" or "Everyone else is probably doing it anyway"? Welcome to what the psychologist Albert Bandura called "moral disengagement": the mental gymnastics we do to justify behaviours that might not align with our values.
Here are some common mental tricks your brain might be playing on you:
- The Euphemism Game: Calling GenAI use "research assistance" or "brainstorming support" when you know it's doing more heavy lifting than that. Words matter, and sometimes we use softer language to make ourselves feel better about questionable choices.
- The Comparison Trap: "At least I'm not having it write my entire essay like some people probably are." Just because someone else might be doing something worse doesn't automatically make your choice ethical.
- The Blame Shift: "The university hasn't given us clear enough guidelines, so how am I supposed to know?" While the University's policies and guidelines regarding the use of GenAI help, uncertainty doesn't eliminate our responsibility to think ethically and act responsibly.
When self-regulation goes wrong
You know that little voice in your head that usually keeps you on track? That's your self-regulatory system at work. But here's the problem: it works best when the rules are clear and consistent. With GenAI, we're in uncharted territory, and your internal monitoring system might be struggling.
Think about it: you've probably internalised clear rules about traditional plagiarism, but what about when GenAI is involved? If you're feeling uncertain, your self-regulation system doesn't have solid ground to stand on. Add in the stress of deadlines, pressure for grades, and the convenience of GenAI tools, and it's easy to see how even well-intentioned students might make choices they later regret.
So, what can you actually do?
Get clear on your own values
Before you even open that GenAI tool, ask yourself what kind of student you want to be. What feels authentic to your learning journey? Sometimes the most ethical choice isn't about following rules perfectly; it's about staying true to your educational goals.
Seek out the models
Look for examples of how people you respect are handling GenAI. Maybe that's a particular lecturer or course mate who seems to have their head screwed on right. Don't just watch what they do; ask them about their reasoning.
Practise ethical decision-making
Next time you're tempted to use GenAI in a contentious area, pause and work through the decision consciously. What are your motivations? What would happen if everyone made the same choice? How would you feel explaining your decision to someone you respect?
Strengthen your self-monitoring
Get better at catching yourself in the moment. When you're about to use GenAI, take a beat and ask: "Am I using this because it genuinely helps my learning, or because I'm trying to take a shortcut?"
Embrace the uncertainty
Here's a radical thought: it's okay not to have all the answers right now. The ethical landscape around GenAI is evolving rapidly, and even experts are figuring it out as they go. What matters is that you're engaging thoughtfully with these questions rather than ignoring them.
The bottom line
You're not alone if you're finding GenAI ethics complicated; your brain is actually working exactly as it should in a genuinely complex situation. The key is being aware of the psychological processes at play and making conscious choices rather than just going with the flow.
Remember, the goal isn't to avoid GenAI entirely (it's probably too late for that anyway) but to use it in ways that support rather than undermine your education. The students who figure this out thoughtfully now will be better positioned for whatever comes next in our AI-integrated world.
Your future self, and your integrity, will thank you for taking the time to think this through properly.
Still feeling stuck on a specific GenAI ethics dilemma? The Digital and Academic Skills team and your personal tutor are great resources for talking through individual situations. Remember, asking for guidance isn't a sign of weakness; it's a sign of maturity.