Emotional Intelligence Series
We’ve all heard of the Dunning-Kruger Effect — that funny, frustrating quirk of human psychology where the least skilled people overestimate their competence, while the truly skilled often underestimate theirs. But a new twist has entered the picture — and this time, it’s our interactions with artificial intelligence that are warping our self-awareness.
According to a fascinating study by researchers at Aalto University, when people use AI tools like ChatGPT, the traditional Dunning-Kruger curve turns on its head. Instead of the least capable people being the most overconfident, it’s the AI-literate — those who know their way around prompts and plugins — who fall hardest for what researchers call the Reverse Dunning-Kruger Trap.
In other words: the smarter you think you are with AI, the more likely you are to trust it too much.
The Great Overconfidence Experiment
The Aalto team ran two large experiments with around 500 participants, all tasked with solving tricky reasoning questions pulled from the LSAT (the notoriously difficult Law School Admission Test). Half of the participants used ChatGPT to help them; the other half worked solo.
Here’s the punchline: Those using AI did perform slightly better overall. But every single participant — AI expert or not — overestimated how well they did.
Even more surprisingly, people who identified as “AI-literate” were more overconfident than novices.
“We expected people who were AI-literate to judge their performance better — but this was not the case.”
— Robin Welsch, the Aalto University professor who led the research
So while AI helped users produce more accurate answers, it also tricked them into thinking they were even more right than they actually were. Confidence soared. Accuracy? Not so much.
Cognitive Offloading: The Lazy Genius Problem
One of the biggest insights from this study is what psychologists call cognitive offloading — the tendency to hand over thinking to a tool or system because it feels easier or safer.
Think about it: You ask ChatGPT a complex question, it spits out a confident-sounding answer, and you move on. No second-guessing. No verification. No friction.
That’s exactly what most participants did.
“People rarely prompted ChatGPT more than once,” Welsch explained. “They simply copied the question, pasted it, and accepted the first answer.”
In short, users were not collaborating with AI — they were outsourcing thinking to it.
This behavior has two effects:
No. 1 — It reduces mental effort (which feels great in the short term).
No. 2 — It shrinks metacognition — our ability to reflect, self-correct, and evaluate whether we might be wrong.
Over time, this can lead to what researchers call “deskilling” — a gradual erosion of our analytical muscles. We get faster answers, but weaker minds.
The Reverse Dunning-Kruger Trap
Traditionally, the Dunning-Kruger Effect says: the less you know, the more you think you know.
But in the world of AI, the script flips.
Now, it’s those who think they know how to use AI — the ones confident in their prompt engineering, model comparisons, and tool fluency — who fall into the overconfidence trap.
This makes intuitive sense: when you believe you “get” how AI works, you’re more likely to trust its outputs. You think you can “spot” errors or bias — but that very confidence blinds you to subtle mistakes.
“AI literacy alone isn’t enough. We need platforms that foster metacognition — tools that help us question our own reasoning.”
— Daniela da Silva Fernandes, a researcher on the Aalto team
In other words, just being good at prompting doesn’t make you a critical thinker. In fact, it might make you less of one.
Why “AI Literacy” Isn’t the Whole Answer
We often hear that the solution to AI misuse is “AI literacy” — teaching people how these systems work. But this study shows that literacy can actually fuel overconfidence. Knowing the basics can give us just enough understanding to feel smart without actually being accurate.
It’s a psychological paradox: The more comfortable we become with AI, the less we question it.
That’s dangerous, because AI systems like ChatGPT speak in a tone of absolute certainty — even when they’re wrong. Research from Stanford found that users rated incorrect AI-generated answers as “more confident and credible” than correct human answers. In the human brain, confidence often equals credibility — and AI exploits that instinct brilliantly.
We don’t just believe the information; we believe the tone.
Why a Single Prompt Isn’t Enough
If there’s one takeaway from this research, it’s this: a single prompt isn’t a conversation — it’s a shortcut.
Metacognition — the ability to reflect on your own thinking — thrives on iteration. Every follow-up question, every “why,” every “what if” forces the brain to engage deeper cognitive processes. But when users treat AI like a vending machine (“insert prompt, receive answer”), that self-checking mechanism shuts down.
Professor Welsch suggests that even simple interventions could help — for example, requiring multiple prompts, encouraging reflection before final answers, or prompting users to justify their reasoning.
Imagine if ChatGPT asked:
“What makes you confident this is correct?”
or
“Would you like to see alternative perspectives before finalizing?”
Those small nudges could reignite the metacognitive spark — helping people think with AI, not just through it.
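For the curious, here is roughly what such a nudge could look like if you scripted it yourself. This is a minimal sketch, not anything from the Aalto study: the ask_llm() helper is a hypothetical stand-in for whatever chat API you actually use.

```python
# A sketch of a "metacognitive nudge" wrapper, in the spirit of the
# interventions Welsch describes. The ask_llm() helper is hypothetical,
# a stand-in for whatever chat API you use; swap in your own client.

REFLECTION_PROMPT = (
    "Before we finalize: what assumptions does your answer rest on, "
    "what could make it wrong, and is there a plausible alternative?"
)

def ask_llm(messages):
    """Placeholder for a real chat-completion call. Returns a canned
    string so the sketch runs without any network access."""
    return "(model response would appear here)"

def answer_with_nudge(question):
    """Never stop at the first answer: force a second, reflective pass."""
    history = [{"role": "user", "content": question}]
    first_answer = ask_llm(history)

    # The nudge itself: a mandatory follow-up prompt that asks the model
    # to surface its own assumptions and failure modes.
    history += [
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": REFLECTION_PROMPT},
    ]
    self_critique = ask_llm(history)

    return {"answer": first_answer, "self_critique": self_critique}
```

The design choice worth stealing is that the second prompt is mandatory, which breaks the vending-machine habit (“insert prompt, receive answer”) by default.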
Why This Matters for the Future of Work
The implications stretch far beyond psychology labs. In workplaces everywhere, AI is quietly changing how we reason, write, and decide.
Marketing teams let AI draft copy without verifying claims. Students submit AI-written essays with factual errors. Analysts copy-paste ChatGPT outputs into reports without validation.
As a result, decision quality drops, even as confidence rises — a dangerous mix in corporate and policy settings. As one Harvard Business Review article put it, “AI is making people more confident, but not necessarily more correct.”
And that’s the heart of the problem: AI amplifies our certainty, not our accuracy.
How to Outsmart the Reverse Dunning-Kruger Trap
So what can we do about it?
Here are three strategies grounded in cognitive science and leadership psychology:
No. 1 — Build “Thinking Friction” Into Your Workflow
After getting an AI response, pause and ask:
- What assumption is this based on?
- What’s missing?
- How would I check this if I didn’t have AI?
These simple pauses keep your critical faculties sharp.
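If you like to tinker, you can even make that friction literal. The sketch below is purely illustrative (none of it comes from the research): a small gate that refuses to hand over an AI answer until you have typed a reply to each of the three questions above.

```python
# "Thinking friction" made literal: a gate that withholds an AI answer
# until you have written down your own checks. Purely illustrative.

FRICTION_QUESTIONS = [
    "What assumption is this based on?",
    "What's missing?",
    "How would I check this if I didn't have AI?",
]

def accept_ai_answer(ai_answer):
    """Release the answer only after each friction question gets a reply."""
    for question in FRICTION_QUESTIONS:
        reply = input(f"{question}\n> ").strip()
        while not reply:  # an empty reply defeats the purpose of the pause
            reply = input("Take a moment and write at least one sentence.\n> ").strip()
    return ai_answer  # handed over only after deliberate reflection
```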
No. 2 — Use AI as a Thought Partner, Not a Replacement
The best users treat AI like a co-pilot — questioning, refining, and iterating together. Multiple prompts create better results and keep your metacognition alive.
No. 3 — Embrace Intellectual Humility
The smartest people aren’t those who know the most — they’re the ones who know when they might be wrong. Humility is the antidote to overconfidence, especially in the age of algorithmic certainty.
“The first rule of the Dunning-Kruger Club is: you don’t know you’re in it.”
— David Dunning, the psychologist who co-discovered the effect
And in the age of AI, we’re all members.
A New Kind of Intelligence
Ironically, AI’s rise isn’t just testing our intellectual intelligence — it’s testing our self-awareness. The next frontier of human evolution may not be faster reasoning, but smarter reflection.
The future belongs to those who can blend AI literacy with metacognitive mastery — people who use technology without losing touch with their own thinking.
Because when machines sound smarter than humans, humility becomes the new intelligence.
If You Liked This Article, You May Also Like …
- Emotional Bravery: How Brené Brown’s Wisdom Can Help You Master Emotional Intelligence
- The Heart of Leadership: Comparing 5 Iconic Leaders Through Emotional Intelligence
- How Emotionally Intelligent Leaders Use the 10-10-10 Rule to Make the Right Decisions