With the uptake of artificial intelligence continuing at an ever-increasing pace, Dr Akhil Bhardwaj explains why we need to be wary of taking humans out of the loop.
Artificial intelligence (AI) is increasingly woven into decisions across society, from businesses and workplaces to government services and policies. In a recently published article, I urge caution – not because AI isn’t powerful, but because its powers are fundamentally different from human reasoning in ways that matter deeply for democracy, fairness, and accountability.
Unlike humans, who can learn from sparse evidence, notice analogies, or sense when rules break down, AI relies on patterns found within its training data. At its best, this allows AI to make sense of large datasets – spotting fraud, recognising faces, or even diagnosing diseases. But this same reliance on past data means it can also reinforce hidden biases or fail spectacularly on ‘edge cases’ that weren’t in the data.
For example, an AI hiring tool trained on historical resumes at a male-dominated company might learn, wrongly, that being male is a job qualification – even if gender is never explicitly mentioned.
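As a rough illustration of how this can happen, here is a minimal sketch in Python (the data, feature names, and numbers are all invented for the example, and scikit-learn is assumed to be available): even when the gender column is withheld, a model trained on biased historical decisions can rediscover the bias through a correlated ‘proxy’ feature.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender_male = rng.integers(0, 2, n)          # hidden attribute, never shown to the model
skill = rng.normal(0, 1, n)                  # genuinely job-relevant signal
proxy = gender_male + rng.normal(0, 0.5, n)  # hypothetical hobby/club feature correlated with gender

# Historical hiring decisions favoured men regardless of skill.
hired = (0.5 * skill + 2.0 * gender_male + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, proxy])          # gender column deliberately excluded
model = LogisticRegression().fit(X, hired)

print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on proxy:", round(model.coef_[0][1], 2))  # a large weight here means the bias was relearned via the proxy
```

The point is not the particular numbers but the mechanism: removing the sensitive attribute from the inputs does not remove its influence from the historical outcomes the model learns to reproduce.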
The article draws an important line between ‘reliable’ and ‘frontier’ AI. Reliable AI refers to tried-and-true technologies that have been tested in well-understood circumstances, like sorting mail or basic language translation. These are often safe to use, though even here oversight is wise.
Frontier AI, such as new large language models that generate text or assist in complex decision-making, behaves in much less predictable ways. Its inner workings are opaque, and its potential impacts remain largely untested.
Philosophical limits
AI’s limitations are not only technical but deeply philosophical. It cannot truly ‘understand’ or reason through metaphors, causal links, or unique, real-world situations – abilities humans draw on every day. Abductive reasoning, the ability to form creative, plausible guesses, is something AI can ‘mimic’ but not genuinely perform. It can hold a convincing conversation or make predictions, yet it lacks real common sense, contextual awareness, and the impetus to ask ‘why?’ in novel situations.
Another risk is the ‘feedback loop’ effect: if AI models make decisions about people (for example, in policing or loan approval) based on biased data, those decisions feed back into the data pool, amplifying discrimination or injustice. The article points to painful real-world consequences, like the Dutch welfare scandal, where AI flagged thousands of innocent families as potential fraudsters, ruining lives.
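To make the loop concrete, here is a toy simulation in Python (the areas, rates, and patrol numbers are invented, not taken from the article or any real system): two neighbourhoods have identical underlying crime rates, but because crime is only recorded where patrols are sent, an initially skewed allocation persists and can drift further each time the system ‘retrains’ on its own records.

```python
# Toy simulation of a data feedback loop; all figures are made up.
import random

random.seed(0)
true_rate = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
patrols = {"A": 70, "B": 30}         # biased starting allocation of 100 patrols

for year in range(1, 6):
    # Crime is only observed where patrols are sent, so the records mirror the allocation.
    observed = {area: sum(random.random() < true_rate[area] for _ in range(patrols[area]))
                for area in patrols}
    total = sum(observed.values()) or 1
    # 'Retraining': next year's patrols follow this year's recorded crime.
    patrols = {area: round(100 * observed[area] / total) for area in observed}
    print(f"year {year}: recorded crime {observed}, next year's patrols {patrols}")
```

Nothing about the neighbourhoods differs, yet the system’s own records become its future inputs, so the original imbalance is never corrected and can harden into apparent ‘evidence’.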
Given these limits, practical, action-oriented frameworks for responsible AI adoption are needed. For critical public sector uses – such as policing, healthcare, or social services – deploying untested or opaque AI should only happen with stringent human oversight, transparency, and a focus on potential harms (‘human in the loop’). In business, reliable AI can yield efficiency gains, but even here, close monitoring is essential, especially where legal or reputational risks exist.
Inadequate replacement
The article also warns against seeing AI as a replacement for people in creative or leadership roles. While AI can support and extend human capability, it is no substitute for empathy, ethical reflection, or real-world judgment. AI’s strengths lie in augmenting human decision-making, not replacing it outright.
In sum, the promise of AI is real, but so are its limitations. Only by acknowledging both can we use it as a force for human betterment, rather than as a blind, unaccountable authority.
Responsible AI means using both our optimism and our humility: trusting AI to help reveal patterns we’d miss, but always trusting ourselves to ask the right questions, apply values, and take responsibility for decisions that matter.