AI Fluency Ministry
The Jagged Frontier.
Why AI Helps Your Sermon Research but Harms Your Theological Discernment.
By AI Fluency Ministry · April 2026
In 2023, Harvard Business School partnered with Boston Consulting Group to run the largest controlled experiment on AI in professional work ever conducted. They gave 758 elite consultants access to GPT-4 and measured what happened.
The results were not what anyone expected. AI did not uniformly help. It did not uniformly hurt. It did both — depending entirely on the task. The researchers called it the “jagged technological frontier.”
Every pastor using AI for sermon prep needs to understand this concept. Because the frontier runs right through the middle of your weekly workflow.
Inside the Frontier: Where AI Excels
On tasks that fell inside AI's capabilities, the results were remarkable. Consultants using GPT-4 completed 12.2% more tasks, finished 25.1% faster, and produced work rated 40% higher in quality.
The lowest-performing consultants gained the most — their output improved by 43%. AI leveled the playing field upward. For routine analytical tasks, summarization, and data processing, AI was genuinely transformative.
For sermon prep, the equivalent tasks are clear: finding cross-references, summarizing commentary positions, generating discussion questions, locating historical background, organizing research notes. These are inside the frontier. AI handles them well. Often brilliantly.
Outside the Frontier: Where AI Harms
Here is where it gets dangerous. On tasks outside AI's capability boundary, consultants who used AI were 19 percentage points less likely to produce correct solutions than those who worked without it. On business problem-solving specifically, AI users performed 23% worse.
Not just “no improvement.” Actively worse. The AI made smart people dumber on hard problems.
19 points worse.
On tasks outside AI's frontier, AI users produced fewer correct solutions than those working alone.
The most alarming finding: even participants who were explicitly warned about AI errors did not challenge the output. The researchers describe this as “mis-calibrated trust” — people trust AI in areas where it is incompetent and distrust it in areas where it adds genuine value.
For sermon prep, the outside-the-frontier tasks are the ones that matter most: evaluating whether an AI-generated interpretation is doctrinally sound. Discerning whether a theological claim is pastorally appropriate for your specific congregation. Recognizing when a fluent, well-structured paragraph contains a subtle category error that would mislead your people.
These tasks require domain expertise. And the frontier is jagged — meaning there is no clean line between where AI helps and where it harms. The same tool, in the same session, will produce excellent research and dangerous theology. Back to back. Without warning.
The Impossible Backhand
Philipp Dubach tells a story about AI-generated sports footage. An AI-created tennis video looked flawless — perfect form, realistic movement, smooth camera work. Everyone who watched it was impressed.
Except one person. A professional tennis player. She watched the same footage and immediately said: “That backhand is impossible. No human wrist moves that way.”
The AI had generated a movement that was statistically plausible but physically impossible. It passed every test except the one administered by someone with deep domain knowledge.
“AI can get to the 95th or 98th percentile of creating something that looks perfect — but then it isn't, and if you have deep knowledge you can spot it immediately.”
Ministry has impossible backhands. A sermon illustration that sounds moving but misattributes a quote. An exegetical point that uses the right Greek word but applies the wrong semantic range. A theological claim that sounds orthodox but subtly confuses justification and sanctification in a way that would unravel under scrutiny.
AI will generate these errors with confidence. It will cite plausible sources. It will embed them in fluent, well-structured prose. And it will sound more certain when it is wrong — MIT research found that AI models use 34% more confident language when hallucinating than when stating facts.
Only deep theological training catches the impossible backhand. And that training cannot be outsourced to the same AI that generated the error.
The Mis-Calibrated Trust Problem
The Harvard/BCG study revealed something that should concern every pastor: “Professionals who had negative performance when using AI tended to blindly adopt its output and interrogate it less.”
This is not about lazy people. These were elite BCG consultants — highly educated, highly motivated, working under evaluation. Twenty-seven percent of them still defaulted to what the researchers call “Self-Automator behavior” — fully delegating to AI despite knowing their performance was being measured.
A systematic review of 35 studies (2015–2025) on automation bias confirmed the pattern: participants followed 80.1% of correct AI recommendations — and 46.1% of incorrect ones. Nearly half of all wrong recommendations were accepted. Showing the AI's reasoning did not help: “offering varied explanation formats did not significantly improve users' ability to detect incorrect AI recommendations.”
The pull toward accepting AI output is not a character flaw. It is a cognitive default. Fighting it requires trained, deliberate resistance. It requires the pastor to have done his own study first — so he has something to compare the AI's output against.
Mapping the Ministry Frontier
Based on the research, here is where the jagged frontier falls in sermon preparation:
Inside the frontier
Finding cross-references. Summarizing commentary positions. Locating historical context. Generating discussion questions. Organizing research notes. Identifying parallel passages. Formatting study guides.
Outside the frontier
Evaluating doctrinal soundness. Discerning pastoral appropriateness. Detecting subtle theological errors. Applying the text to your specific congregation. Recognizing when a claim sounds orthodox but is not. Sensing the Spirit's leading on emphasis and application.
The tasks outside the frontier are the ones that define your calling. AI cannot do them. And the jagged frontier means you will never get a clean warning when AI crosses from the first category into the second.
The Practical Guardrail
The Gospel Coalition offers the clearest practical rule: “Begin Monday with prayerful reading of your text, allowing the Spirit to speak before engaging any tools.”
Do your own study first. Form your own convictions. Then use AI to expand your research — to find what you might have missed, to check your cross-references, to surface commentary positions you had not considered. That is augmentation. That is inside the frontier.
But never let AI do the thinking you were called to do. Because on the tasks that matter most — the tasks that define your ministry — AI does not just fail to help. It makes you worse.
The frontier is jagged. Know where it falls. And stay on the right side of it.
OpenLumin was built for the inside of the frontier.
It retrieves evidence — commentaries, cross-references, historical context.
The theological discernment stays with you.
About: AI Fluency Ministry is a project helping the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion that retrieves evidence from 15+ scholarly sources so pastors can study with depth and teach with confidence.
