
AI-Induced Overconfidence:
Why the More You Use AI,
the Better You Think You Are at Spotting Errors.

By AI Fluency Ministry · April 2026

Here is what nobody told you about using AI for ministry: the more you use it, the more confident you become in your ability to catch its mistakes. And the more confident you become, the worse you actually are at catching them. This is not a theory. It is measured. And it has a name: AI-induced overconfidence. The research community is calling it the reverse Dunning-Kruger effect — and the people most at risk are not the novices. They are the power users.

The Aalto University Finding

Researchers at Aalto University in Finland published a study in Computers in Human Behavior (2026) that should concern every pastor using AI for sermon prep. They found that when participants used AI tools, their actual performance improved by 3 points. That is the good news.

The bad news: participants believed their performance improved by 7 points. They overestimated their improvement by 4 points — meaning the overestimation exceeded the actual gain.

+3 actual. +7 perceived.

The overconfidence gap is larger than the real improvement.

But the most alarming finding was this: “What's really surprising is that higher AI literacy brings more overconfidence,” said lead researcher Robin Welsch. The people who understood AI the best — who could explain how language models work, who knew the terminology, who considered themselves skilled users — showed the greatest gap between perceived and actual performance.

In other words, the classic Dunning-Kruger effect says unskilled people overestimate their abilities. The AI version flips it: skilled AI users overestimate their ability to catch AI errors. The expertise creates the blind spot.

46.1% of Incorrect Recommendations Followed

The overconfidence problem does not stay in the lab. A 2023 Springer meta-analysis found that 46.1% of incorrect AI recommendations were followed by professionals in experimental settings. Nearly half the time the AI was wrong, the human went along with it anyway.

And here is what makes it worse: offering explanations does not help. A 2025 follow-up study found that “offering varied explanation formats did not significantly improve users' ability to detect incorrect AI recommendations.” Showing people why the AI gave a particular answer did not make them better at recognizing when that answer was wrong. The overreliance persisted regardless of transparency.

Apply this to ministry. A pastor asks AI for background on a passage. The AI returns a confident, well-structured response with a theological claim that sounds right but is subtly wrong — perhaps conflating two Greek words, or attributing a position to a scholar who never held it, or presenting a minority interpretation as mainstream consensus. The pastor, confident in his ability to evaluate AI output, incorporates it into Sunday's sermon. The congregation hears it as authoritative. Nobody checks. The error becomes doctrine.

The AI Is Overconfident Too

The problem compounds because the AI itself is overconfident. MIT research (2025) found that when AI models hallucinate — when they fabricate facts — they are 34% more likely to use phrases like “definitely,” “certainly,” and “without doubt” compared to when they are providing accurate answers. The more wrong the AI is, the more certain it sounds.

Carnegie Mellon researchers documented this in LLMs directly: models get more overconfident after performing poorly, while humans typically adjust their expectations downward. In one test, Google's Gemini predicted it would get 10 answers correct, actually scored an average of 0.93, and then estimated it would get 14.4 correct next time. It failed and became more confident.

A clinical AI study indexed in PubMed Central put it starkly: LLMs “were nearly as confident when they were wrong as when they were right.” Post-training alignment pushes models to deploy certainty because human raters reward “clear and decisive” language. The system is literally trained to sound sure even when it should not be.

Microsoft's Warning: Less Critical Thinking

Microsoft Research surveyed 319 professionals across 936 real-world AI use cases and found a direct correlation: the more confidence a worker placed in generative AI, the less critical thinking they exercised. The more a worker trusted the AI, the less they scrutinized its output. Workers “refrain from critical thinking when they lack the skills to inspect, improve, and guide AI-generated responses.”

The researchers documented a fundamental shift in cognitive effort: from information gathering to information verification, from problem-solving to AI response integration, from task execution to task stewardship. The nature of the work changes — but the new work (verification, evaluation, discernment) is harder than the old work. And most people are not trained for it.

“Higher confidence in GenAI correlates with less critical thinking. Higher self-confidence correlates with more critical thinking.”

— Microsoft Research, CHI 2025

Read that again. Confidence in AI decreases critical thinking. Confidence in yourself increases it. The solution is not to trust the AI more. It is to trust your own trained judgment — and to have judgment worth trusting.

What This Means for the Church

Gartner predicts that by 2026, atrophy of critical-thinking skills due to GenAI use will push 50% of global organizations to require “AI-free” skills assessments. The secular world is already recognizing the danger. The church should be ahead of this curve, not behind it.

If you are a pastor using AI for sermon preparation, the research says three things clearly:

1. Your confidence in catching AI errors is probably inflated. The Aalto data shows the overconfidence gap exceeds the actual performance gain. You are less sharp than you think.

2. Explanations and transparency do not solve the problem. Even when you can see the AI's reasoning, you are still likely to follow incorrect recommendations 46% of the time.

3. The AI sounds most certain when it is most wrong. MIT's 34% finding means the responses you trust most (the clear, decisive, confident ones) are the ones most likely to be fabricated.

The antidote is not less AI. It is better AI — tools that show you the evidence, mark what is verified and what is not, and force you to do the thinking instead of outsourcing it.

AI makes you feel smarter than it actually makes you.
The overconfidence gap is real. The errors are invisible.
The fix is evidence you can verify.

OpenLumin marks every citation as verified or training-assisted. You always know what you can trust — because trusting the AI is not enough.


About the author: AI Fluency Ministry is a research project helping the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion that retrieves evidence from 15+ scholarly sources so you do the thinking.
