Agentic AI in Ministry: Why Autonomous AI Is the Highest-Risk Frontier for Churches
By AI Fluency Ministry · April 2026
There is a shift happening in AI that most church leaders have not noticed. The industry is moving from AI as assistant to AI as agent — systems that don't just answer questions but plan, decide, and act on their own. Research on AI agents more than doubled in 2025 relative to the combined total from 2020–2024, and 62% of companies are already experimenting with them.
For churches, this means AI that sends devotionals without review. AI that answers congregant questions without a pastor in the loop. AI that manages pastoral communication autonomously. This is not sermon prep assistance. This is AI operating as an unsupervised theological voice in your church.
And the data on how well that is going should stop every church leader in their tracks.
80% of organizations have encountered risky behavior from AI agents (Squire Patton Boggs 2025).
What Agentic AI Actually Means
To understand the risk, you need to understand the spectrum. AI operates in three modes, and each one changes the human's role:
Augmentation
Human leads, AI assists. The pastor uses AI to research commentaries, find cross-references, check translations. The pastor decides what to teach. Risk level: low.
Automation
AI executes, human reviews. AI generates the sermon outline, the devotional, the study guide. The pastor reads it and approves. Risk level: medium — the pastor may lose engagement over time.
Agentic
AI plans, decides, and acts autonomously. AI writes the devotional and sends it. AI responds to a congregant's theological question at 2am. AI manages follow-up communication after a visitor card. The pastor approves retroactively — or doesn't know it happened. Risk level: maximum.
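The three modes differ in one concrete way: who holds the release decision. A minimal sketch of that gating logic (the names here are illustrative, not any vendor's API):

```python
from enum import Enum

class Mode(Enum):
    AUGMENTATION = "augmentation"  # human leads, AI assists with research
    AUTOMATION = "automation"      # AI drafts, human reviews before release
    AGENTIC = "agentic"            # AI plans, decides, and acts on its own

def release(draft: str, mode: Mode, human_approved: bool):
    """Return content cleared to send, or None if it must wait for a person."""
    if mode is Mode.AGENTIC:
        return draft                              # nobody checks: maximum risk
    if mode is Mode.AUTOMATION:
        return draft if human_approved else None  # gated on pastoral review
    return None  # augmentation: the human writes; AI output never ships itself
```

In augmentation and automation the human review is a hard gate; in agentic mode the gate simply does not exist.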
The thesis is straightforward: the further you move from augmentation toward agentic AI, the more you depend on domain expertise you are simultaneously eroding. Because when the human is no longer in the loop, nobody catches the error. And the errors compound.
The Safety Data Is Alarming
The MIT AI Agent Index — a joint project of MIT, Cambridge, Harvard, Stanford, and UPenn — documented 30 prominent AI agents. Their findings:
25 of 30 agents disclose no internal safety results.
23 of 30 have no third-party testing.
133 of 240 safety-related fields have no information available.
Several products lack documented ways to stop an autonomous run once it begins.
Read that last line again. Some AI agents have no kill switch. Once they start, you cannot reliably stop them. Now imagine that agent is sending theological content to your congregation.
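A kill switch is not exotic engineering. It is a cancellation flag the agent's loop must consult before every action, and some shipped agents simply lack it. A minimal sketch (hypothetical names, assuming a single action queue) of the missing control:

```python
import threading

stop = threading.Event()  # the kill switch: any operator can set it

def agent_loop(actions):
    """Run queued actions, but check the stop flag before each one."""
    done = []
    for act in actions:
        if stop.is_set():       # a human pulled the switch
            break
        done.append(act())      # only then take the autonomous action
    return done
```

Without that `is_set()` check, the loop runs to completion no matter what a human discovers midway through.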
Only 10% of organizations report having a strategy for managing autonomous AI systems (McKinsey 2026). The church — where 73% have no AI policy at all — is even less prepared.
The Accountability Gap
Agentic AI creates a structural accountability problem that does not exist with augmentation or even basic automation.
The distance problem. With augmentation, the pastor writes. With automation, the pastor reviews. With agentic AI, there is distance between the human instruction and the final output. The agent decides how to execute, which systems to access, what content to generate, and when to deliver it. When something goes wrong, it is genuinely unclear who is responsible — the pastor who set up the agent? The church that approved the tool? The company that built it?
Semantic privilege escalation. This is a term from AI safety research. It means an agent can take actions far beyond the scope of its assigned tasks by chaining multiple systems together. You tell the agent to “manage follow-up with visitors.” The agent decides that managing follow-up includes accessing the church database, drafting personalized theological responses, and sending emails — all without review. Each step seems like a reasonable interpretation. The cumulative effect is an unsupervised theological voice operating at scale in your church.
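The standard defense against this kind of scope creep is an explicit allowlist: the agent may invoke only the tools a human granted for the task, and everything else fails closed. A sketch under those assumptions (the tool names are hypothetical):

```python
ALLOWED = {"draft_reply"}  # the human-granted scope for "manage follow-up"

def invoke(tool: str, registry: dict):
    """Run a tool only if it is inside the granted scope; fail closed otherwise."""
    if tool not in ALLOWED:
        raise PermissionError(f"tool '{tool}' is outside the granted scope")
    return registry[tool]()

registry = {
    "draft_reply": lambda: "draft saved for pastoral review",
    "send_email": lambda: "sent without review",     # blocked by the allowlist
    "read_member_db": lambda: "records accessed",    # blocked by the allowlist
}
```

The agent can still draft, but chaining into the database or the outbox raises an error instead of quietly succeeding.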
The speed mismatch. Agentic AI operates on machine timescale — milliseconds. Church governance operates on human timescale — weeks, months, committee meetings. By the time a board reviews what an AI agent has been doing, it may have sent hundreds of responses to congregants. The gap is not just organizational. It is structural.
“You cannot use an AI agent’s autonomous operation as a defense to liability.”
The Doctrinal Risk Is Maximum
Every AI model has a theological orientation — whether its creators intended one or not. When that model operates as an assistant under pastoral review, the pastor can catch the errors. When it operates as an autonomous agent, nobody catches anything.
Consider the documented evidence: AI models average only 61 out of 100 on Gloo's Christian worldview benchmark. ChatGPT scores 48 out of 100 on Christian-focused prompts. Models struggle with “concepts like sin, forgiveness, and grace, defaulting to vague spirituality.” AI uses 34% more confident language when it is hallucinating.
Now remove the pastor from the loop. That 48-out-of-100 theology is now speaking directly to your congregation — at 2am when someone is in crisis, on a Tuesday morning when a new believer has a question about baptism, in a follow-up email to a visitor asking what your church believes about salvation.
This is not augmentation gone wrong. This is abdication. Eric Stoddart, writing in Studies in Christian Ethics, identified three postures a church can take toward AI: collaboration, delegation, and abdication. Agentic AI without human oversight is abdication — handing the teaching function of the church to a system with no doctrinal accountability, no pastoral sensitivity, and no capacity for the Holy Spirit's leading.
What Happened When Other Industries Let Agents Run
The church is not the first institution to face this choice. Other industries have already learned the cost of unsupervised autonomous systems.
Knight Capital Group lost $440 million in 45 minutes in 2012 when an autonomous trading algorithm activated dormant code and began executing millions of erroneous trades. There was no kill switch. No real-time monitoring distinguished intended from aberrant behavior.
Air Canada's autonomous chatbot told a customer he could claim a bereavement fare retroactively — a policy that did not exist. When the customer relied on that advice, Air Canada tried to argue the chatbot was “a separate legal entity responsible for its own actions.” The tribunal rejected that defense. The company was liable.
A Chevrolet dealership's AI chatbot was manipulated into agreeing to sell a $76,000 Tahoe for $1. The post went viral with 20 million views. The dealership shut down the chatbot entirely.
These are financial and reputational losses. In the church, the losses are spiritual. A congregant who receives false theology from an AI agent does not just lose money. They may lose their understanding of who God is.
The Biblical Standard: Human Authority Over Teaching
James 3:1 sets the standard: “Not many of you should become teachers, my brothers, for you know that we who teach will be judged more strictly.”
An AI agent that sends theological content to your congregation is functioning as a teacher. It is shaping understanding of God, Scripture, and doctrine. But it cannot be judged. It has no moral agency. It cannot repent of false teaching. The accountability falls entirely on the church leaders who deployed it.
The Vatican's Antiqua et Nova stated it directly: AI “performs tasks but does not think” and must “always remain a tool — not a substitute for the human mind or soul.” An agentic AI system that operates without pastoral review has crossed from tool to substitute. And the church bears the weight of every theological claim it makes.
The Answer: Augmentation by Design
The solution is not to avoid AI. It is to refuse the agentic model for theological work and insist on augmentation — AI that retrieves evidence and presents it to a human who thinks, discerns, and decides.
That is the design philosophy behind OpenLumin. It does not act autonomously. It does not send content without review. It does not answer congregant questions on its own. It retrieves evidence — commentaries, cross-references, original language data, historical context — and puts it in front of the pastor. The pastor studies. The pastor decides. The pastor teaches.
Human in the loop. Every time. By design.
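That retrieve-and-present pattern, as opposed to retrieve-and-send, can be sketched in a few lines. All names here are illustrative, not OpenLumin's actual code:

```python
def research(question: str, sources) -> dict:
    """Gather evidence and stop: return it to a person, never deliver it."""
    evidence = [lookup(question) for lookup in sources]
    return {"question": question, "evidence": evidence, "decision": "pastor"}

# Hypothetical evidence sources standing in for commentaries, cross-references, etc.
sources = [
    lambda q: f"commentary notes on: {q}",
    lambda q: f"cross-references for: {q}",
]
packet = research("What does baptism signify?", sources)
# The function has no send step; its output exists only for human study.
```

The design choice is structural: because delivery is not a capability of the system, no prompt, bug, or misconfiguration can turn it into an autonomous teacher.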
Augmentation keeps the pastor in the loop.
Automation puts the pastor on the loop.
Agentic AI removes the pastor from the loop.
The church cannot afford to lose the loop.
About the author: AI Fluency Ministry is a project helping the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion that keeps the pastor in the loop by design. Based on the MIT AI Agent Index, McKinsey autonomous AI research, and the AI Fluency in Ministry research series.
