
Automation Bias: Why Smart Pastors Still Trust Wrong AI Answers

By AI Fluency Ministry · April 2026

A radiologist with twenty years of experience looks at a scan. The AI highlights an area. The radiologist agrees. The AI is wrong. And now a patient has a missed diagnosis. This is not a hypothetical. It happened. And a 2024 RSNA study measured exactly how badly: when AI provided incorrect recommendations with visual explanations, physician accuracy dropped to 23.6%.

The explanations didn't help doctors catch the error. The explanations made doctors more trusting.

That finding should terrify every pastor using AI for sermon prep. Because what happened to those radiologists is not a medical problem. It is a human problem. And it has a name: automation bias.

What Automation Bias Actually Is

Automation bias is the documented tendency to favor suggestions from automated systems — even when those suggestions are wrong, and even when you have the expertise to know better.

It is not laziness. It is not carelessness. It is a cognitive default hardwired into how the human brain processes information. When a machine presents an answer confidently and clearly, your brain treats it the way it treats authority: with deference.

A 2025 systematic review of 35 studies published by Springer found that accuracy dropped significantly for all professional groups when AI provided incorrect recommendations. Not just novices. Not just the distracted. All groups. The bias does not discriminate by intelligence, training, or intention.

46.1%

of incorrect AI recommendations were followed by professionals in experimental studies (Springer 2023).
Nearly half of wrong answers — accepted without challenge.

Three Mechanisms That Make Pastors Vulnerable

The research identifies three forces that drive automation bias. Each one maps directly onto pastoral ministry.

Mechanism one: efficiency pressure. Questioning AI output takes effort. When you're preparing a sermon under time pressure — and the exegetical note AI generated is well-structured, well-cited, and plausible — the cognitive cost of verifying it competes with every other demand on your week. Under workload pressure, the brain defaults to letting the machine “think” for you. This is cognitive offloading, and it is a measurable, studied phenomenon.

Mechanism two: the persuasiveness of AI. AI output is formatted with confidence. It uses authoritative language, clear structure, and the tone of certainty. A Springer study described it plainly: the “inherent persuasiveness creates a false sense of expertise.” The output looks like it came from someone who knows what they're talking about. That appearance is enough to bypass your critical filters.

Mechanism three: the confidence inversion. Here is where it gets dangerous. MIT researchers found that AI models use 34% more confident language when hallucinating than when stating verified facts — phrases like “definitely,” “certainly,” “without a doubt.” The model sounds most sure when it is most wrong. Your brain reads that confidence as competence. And you move on.

“Offering varied explanation formats did not significantly improve users’ ability to detect incorrect AI recommendations.”

— Springer, Systematic Review of 35 Studies (2025)

Explanations Do Not Fix This

The most sobering finding in the research is that seeing the AI's reasoning for a wrong answer does not prevent people from accepting it.

Researchers tested whether providing AI's reasoning — so-called “explainable AI” — would help users catch errors. It didn't. The same Springer review concluded that offering explanations “did not significantly improve users' ability to detect incorrect AI recommendations.”

In the RSNA radiology study, AI with visual explanations — highlighting exactly where and why it flagged an area — made doctors worse. The explanation didn't trigger skepticism. It triggered trust. Doctors saw that the AI “had a reason” and deferred faster.

Apply this to ministry. When an AI tool provides a theological claim and cites a commentary — “According to Matthew Henry, Paul's argument in Romans 9 demonstrates...” — the citation feels like verification. But the citation may be fabricated. The argument may be distorted. And the pastor, seeing what looks like sourced reasoning, moves on. The explanation didn't protect against error. It accelerated trust in error.

Even Experts Who Know Better Still Defer

The Harvard/BCG study of 758 consultants revealed something disturbing: 27% of highly trained professionals fully delegated their work to AI — despite knowing they were being evaluated on accuracy. These were not careless people. They were consultants at one of the world's most selective firms. And more than a quarter of them handed the keys to AI anyway.

The researchers described the phenomenon as “mis-calibrated trust” — people trust AI in areas where it is incompetent and distrust it where it actually adds value. Even participants who were explicitly warned about wrong answers did not challenge AI output.

If BCG consultants — people whose entire career is analytical rigor — cannot consistently catch wrong AI output, the expectation that busy pastors will catch wrong theological output is not realistic. It is wishful thinking dressed up as policy.

What This Means for Sermon Prep

64% of pastors now use AI for sermon preparation. Most of them believe they are using it as a tool — researching, checking references, finding illustrations. But automation bias means the tool is subtly reshaping the theology without the pastor noticing.

Here is the sequence that automation bias creates:

1. AI generates an exegetical insight. It sounds authoritative. It cites a commentary. The language is confident.

2. The pastor reads it under time pressure. The cognitive cost of independent verification competes with a dozen other tasks.

3. The brain defaults to trust. Automation bias kicks in. The output looks right. The citation looks real. The pastor moves on.

4. The error enters the sermon. The congregation hears it as the pastor's conviction. Nobody knows it originated from a model that was 34% more confident because it was hallucinating.

James 3:1 warns: “Not many of you should become teachers, my brothers, for you know that we who teach will be judged more strictly.” Automation bias means the judgment is not just on what you teach intentionally — it is on what you teach by default, because you trusted a machine that sounded certain.

The Only Defense: Verified Evidence, Not Generated Opinion

Automation bias cannot be solved by willpower. The research proves that. Warnings don't work. Explanations don't work. Even knowing the bias exists does not reliably prevent it.

What does work is changing what the AI gives you. If the AI generates theological opinions, you will trust them. That is the bias. But if the AI retrieves verified evidence — primary source commentaries, original language data, historical context, cross-references — and marks every claim as either verified or flagged for review, the dynamic changes. You are no longer trusting an AI opinion. You are reading source material and drawing your own conclusions.

That is why OpenLumin is built as a research companion, not a theology generator. Every claim is sourced. Every citation is marked as verified from evidence data or flagged as training-assisted. The AI retrieves. You think. The bias has less surface area to operate on — because there is no AI opinion to defer to.
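
For readers who want to see the pattern rather than just the principle, the sketch below is a minimal, hypothetical illustration of the "retrieve and mark" approach described above. It is not OpenLumin's actual implementation or API; the names (Claim, SOURCE_INDEX, verify_claim) and the toy evidence store are invented for this example. The point is simply that a claim gets labeled verified only when its citation resolves to real source text, and is flagged for review otherwise.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "retrieve and mark" pattern.
# Not OpenLumin's code; names and data are invented for illustration.

@dataclass
class Claim:
    text: str         # the statement the AI surfaced
    citation: str     # the source it points to
    status: str = ""  # "verified" or "flagged-for-review"

# Stand-in for a real evidence store (commentaries, lexicons, cross-references).
SOURCE_INDEX = {
    "Matthew Henry on Romans 9": "full commentary text retrieved from the source",
}

def verify_claim(claim: Claim) -> Claim:
    """Mark a claim verified only if its citation resolves to real source text."""
    source_text = SOURCE_INDEX.get(claim.citation)
    claim.status = "verified" if source_text else "flagged-for-review"
    return claim

claims = [
    Claim("Paul's argument in Romans 9 demonstrates...", "Matthew Henry on Romans 9"),
    Claim("Chrysostom makes the same point.", "Chrysostom, Homily 16 on Romans"),
]

for claim in map(verify_claim, claims):
    print(f"[{claim.status}] {claim.text} ({claim.citation})")
```

In this toy run the first claim comes back verified because its citation resolves to stored source text, while the second is flagged for review because it does not. That is the design idea in miniature: the reader is handed evidence with its status visible, not an unmarked opinion to defer to.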

Automation bias is not a character flaw. It is a cognitive default.
The question is not whether your pastors are vulnerable.
The question is whether your tools are designed to protect them.


About the author: AI Fluency Ministry is a project helping the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion that retrieves verified evidence so pastors can do the thinking. Based on research from 35+ academic studies on automation bias, the Harvard/BCG frontier study, and the AI Fluency in Ministry research series.
