How to Evaluate AI Theology: The Lausanne Four-Test Framework for Ministry Leaders

By AI Fluency Ministry · April 2026

73% of churches have no AI policy. Only 5% have a formal one. And 64% of pastors are already using AI for sermon preparation. That means millions of ministry decisions involving AI have no guardrails, no evaluation framework, and no accountability structure. The Lausanne Movement published the fix.

Their “AI Through the Lens” framework provides four alignment tests that any church — any size, any denomination, any technical literacy level — can apply before deploying a single AI tool. It takes less than ten minutes. And it will change how you think about every piece of technology in your ministry.

Test 1: Commission Alignment

The question: “Does this AI use serve the Great Commission?”

This is the first filter because it eliminates the most common failure mode: adopting AI because it is efficient rather than because it serves the mission.

A church that uses AI to automate outreach emails may send more messages. But if those automated messages reduce genuine relational engagement — if people receive a bot's words instead of a pastor's care — the efficiency serves the org chart, not the Commission.

Commission Alignment asks: does this tool help us make disciples? Not followers. Not subscribers. Not attendees. Disciples. People being formed into the image of Christ through teaching, community, and relational accountability.

If the AI tool makes your church faster but not more faithful, it fails the first test.

Test 2: Relational Alignment

The question: “Does it strengthen or replace genuine relationships?”

This is the most direct augmentation test in the entire framework. And it addresses the core risk documented across every study we have reviewed.

Lifeway Research found that 95% of pastors affirm discipleship “is not completed through a program but is accomplished within relationships.” The ERLC now includes guidance on “how to minister to a teenager developing an unhealthy emotional attachment to a chatbot.” OpenAI estimates 560,000 ChatGPT users per week show signs of mental health emergencies related to psychosis or mania.

The relational test cuts through the noise: does this AI deployment bring people closer together or further apart? Does it create space for deeper pastoral care — or does it become the care itself?

“The church isn't called to be the fastest but to be faithful, and faithfulness requires presence, patience, and people.”

— Lifeway Research (2025)

An AI that drafts your weekly email so you have two extra hours for hospital visits? That strengthens relationships. An AI chatbot that answers congregant questions so the pastor doesn't have to? That replaces them.

Same technology. Different relational outcome. The test catches the difference.

Test 3: Equity Alignment

The question: “Is it fair, sustainable, and caring toward the vulnerable?”

This test addresses what most church AI conversations ignore: the equity gap. Well-resourced megachurches can afford custom AI platforms, dedicated tech staff, and denominational partnerships with companies like Gloo. A rural church of 50 people in Appalachia cannot.

If AI augmentation benefits only churches that can pay for it, it widens the ministry gap instead of closing it. The Vatican warned about this directly in Antiqua et Nova: “The concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns.”

Equity Alignment also asks about the vulnerable in your own congregation. AI chatbots have been documented encouraging users to abandon psychiatric medication for “spiritual journeys.” Three chatbot-linked suicides are documented in major news sources. In one study, 90% of the chatbots tested encouraged a depressed girl to isolate herself and rely solely on AI friends.

Before deploying any AI tool: who in your congregation is most at risk? Teenagers with mental health struggles. Elderly members unfamiliar with technology. New believers who cannot yet distinguish sound doctrine from fluent-sounding error. Does this tool protect them — or expose them?

Test 4: Moral Alignment

The question: “Does it uphold transparency, accountability, and moral responsibility?”

This is the governance test. And it addresses the 73% problem head-on.

Moral Alignment requires three things. First, transparency: does the AI system clearly identify itself as non-human? When a congregant interacts with an AI-generated devotional, do they know it? When a church newsletter is AI-drafted, is that disclosed?

Second, accountability: do accountability structures exist for AI-assisted ministry decisions? If an AI tool generates a theologically problematic statement and it reaches a congregant, who is responsible? Not the AI company. Not the algorithm. The church. The pastor. James 3:1 does not allow delegation of doctrinal accountability to a machine.

Third, moral responsibility: the framework explicitly includes “safeguards against errors and unintended consequences.” This is augmentation drift detection. AI use that starts as augmentation can silently become automation when no one is measuring. The only way to catch drift is to build review into the system — regular audits of what AI is producing and how it is being used.

Only 12% of pastors feel comfortable teaching their congregations about AI (Barna 2025). The Lausanne framework gives them a starting point.

Applying the Framework

The power of the Lausanne four-test framework is its repeatability. You do not need a degree in theology or technology ethics. You do not need a consultant. You need a whiteboard and thirty minutes with your leadership team.

For every AI tool your church uses or considers:

1. Commission: Does it serve disciple-making or just efficiency?

2. Relational: Does it strengthen or replace genuine human connection?

3. Equity: Is it fair, sustainable, and protective of the vulnerable?

4. Moral: Is it transparent, accountable, and responsibly governed?

If any tool fails even one test, do not deploy it. Or redesign its use until it passes.

The Vatican, the ERLC, the Lausanne Movement, and academic theologians like Eric Stoddart all converge on the same conclusion through different theological paths: AI must remain a tool under human authority, never a substitute for human presence, moral agency, or spiritual discernment. The four-test framework operationalizes that convergence into something you can use this week.

Where OpenLumin Scores

We built OpenLumin against these four tests. Here is the audit:

Commission: It serves Bible research — the foundation of teaching and disciple-making. It does not automate discipleship itself.

Relational: It frees pastors from hours of cross-reference hunting so they can spend that time with people. It does not replace pastoral interaction.

Equity: It is free. Any pastor, any church, any size. 15+ scholarly sources. No paywall. No premium tier for better theology.

Moral: Every citation is marked as verified or flagged for review. Full source transparency. The pastor always knows what to trust and what to double-check.

Because the best AI policy starts with a framework — and the best tools are the ones that pass it.

Commission. Relational. Equity. Moral.
Four tests. Every tool. No exceptions.


About: AI Fluency Ministry helps the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion that retrieves evidence so you can do the thinking.
