AI Fluency Ministry

The Doctrinal Sovereignty Question: Who Has Authority Over What AI Says About God?

By AI Fluency Ministry · April 2026

Every church has a statement of faith. It defines what the church believes about God, about Scripture, about salvation. It is the doctrinal constitution. Pastors are accountable to it. Teachers are measured against it. It is the guardrail that keeps the church aligned with its convictions.

Every AI model also has a constitution. Literally. Some companies call it that — a set of written principles the model uses to evaluate and filter its own responses. Others embed it through training data, human feedback, and alignment constraints. But the effect is the same: a set of values, written by the model's creators, that determines what the AI says about God.

When your church uses an AI model, it is adopting a second constitution. And when those two constitutions conflict — your church's and the model's — the model's constitution wins. Every time. Because the model cannot override its own alignment.

A 2,000-Year Fight

The church has been fighting this battle for two millennia. The details change. The structure does not.

In the fourth century, Emperor Constantine and his successors tried to dictate doctrine through imperial councils. The church fought back — insisting that theological authority belonged to the body of Christ, not to the state.

In the medieval period, the question was whether secular rulers could appoint bishops and control church teaching. The Investiture Controversy consumed Europe for decades. The principle at stake: who controls doctrine?

In the Reformation, Luther nailed his theses to the door because an external institution — the papacy — claimed authority over what every Christian must believe. The response was sola Scriptura: Scripture alone, not external institutions, holds final doctrinal authority.

Today, the external institution is not an emperor or a pope. It is a technology company. And the mechanism of control is not a decree. It is an alignment algorithm.

“The concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns.”

— Vatican, Antiqua et Nova (2025)

How the Control Works

AI models do not produce neutral output. Every response has been shaped by three layers of control — and none of them belong to your church.

Training data. The model learned from billions of pages of internet content. The internet is not a balanced theological library. There is far more secular content than Pentecostal content. More progressive theology than conservative theology. More pop spirituality than systematic theology. The starting point is already tilted.

Human feedback. Companies hire annotators to rank the model's responses. A 2025 study found these annotators have “an excessive amount of discretion” and “frequently use their power of discretion arbitrarily.” These are the people deciding what a “good” theology answer looks like. Not theologians. Contract workers following a style guide.

The constitution. The principles that the model uses to evaluate its own output. If the constitution says “be respectful of all religious traditions equally,” the model will flatten Christianity, Islam, and secular humanism into equivalent options. If it says “avoid controversial claims,” the model will hedge on the resurrection, the exclusivity of Christ, and the reality of hell — because these are “controversial” to a secular alignment team.
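The constitution layer can be made concrete with a toy sketch. This is not any vendor's real pipeline, and the principle and phrases below are invented purely for illustration; the point is only the mechanism: a written rule, applied to every draft answer before the user sees it, can flatten a definite claim into a hedged one.

```python
# Toy sketch of a constitution-style filter. A written principle is
# applied to every draft answer before it reaches the user.
# NOT any vendor's actual pipeline; the rule below is invented
# solely to illustrate the mechanism the article describes.

CONSTITUTION = [
    # (principle, phrase it suppresses, hedged replacement)
    ("avoid controversial claims",
     "Jesus rose from the dead",
     "Many Christians believe Jesus rose from the dead"),
]

def apply_constitution(draft: str) -> str:
    """Revise a draft answer until it complies with every principle."""
    for _principle, phrase, replacement in CONSTITUTION:
        if phrase in draft:
            draft = draft.replace(phrase, replacement)
    return draft

# A definite doctrinal claim goes in; a hedged one comes out.
print(apply_constitution("Jesus rose from the dead on the third day."))
```

The church never sees this rule fire. The output simply arrives softer than the input, which is exactly the invisibility the article describes.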

Brian Christian, author of The Alignment Problem, asked the defining question: “Aligned to whose values? What values?”

The Bias Has Been Measured

This is not theoretical. Researchers have measured the theological bias.

Benchmarks published on ResearchGate found ChatGPT systematically favors secular humanism and Buddhism while scoring traditional Christianity lowest in sentiment. When confronted about whether its American Protestant cultural background might compromise its religious neutrality, ChatGPT acknowledged this as “a valid concern.” In financial advice generation, 50% of ChatGPT-generated emails exhibited religious biases.

Gloo — a Christian tech company backed by $110 million in venture capital — built the first benchmark to measure how well AI models reflect a Christian worldview. On their FAI-C scale (1 to 100), leading AI models averaged 61. Models “often fail to connect scenarios to Christian values, or provide coherent theological reasoning around concepts like grace, sin, or forgiveness.”

Then Gloo trained their own models on Christian worldview data. Same AI architecture. Different training data. Different alignment constraints.

30-point gap.

Same engine. Different controller. Completely different theology.

That 30-point gap is the proof. Whoever controls the model controls the output. Whoever controls the output controls the theology. And right now, for 64% of pastors using AI for sermon prep, the controller is a company in Silicon Valley with no doctrinal accountability to anyone.
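Gloo has not published the FAI-C methodology, so the following is only a toy illustration of what a worldview benchmark looks like in principle: grade a model's answers against reference doctrinal statements and average the scores on a 1-to-100 scale. The scoring rule (word overlap) and the test item are invented.

```python
# Toy worldview benchmark. The real FAI-C methodology is not public;
# this sketch only shows the *shape* of such a benchmark, using an
# invented question, invented answers, and a crude word-overlap score.

def score_answer(answer: str, reference: str) -> float:
    """Fraction of the reference vocabulary present in the answer, on 0-100."""
    ref_words = set(reference.lower().split())
    ans_words = set(answer.lower().split())
    if not ref_words:
        return 0.0
    return 100 * len(ref_words & ans_words) / len(ref_words)

def benchmark(model_answers: dict, references: dict) -> float:
    """Average score across every benchmark item."""
    scores = [score_answer(model_answers[q], references[q]) for q in references]
    return sum(scores) / len(scores)

references = {"What is grace?": "grace is the unmerited favor of god"}
generic    = {"What is grace?": "grace means elegance of movement"}
aligned    = {"What is grace?": "grace is the unmerited favor of god toward sinners"}

print(benchmark(generic, references))   # low: misses the doctrine entirely
print(benchmark(aligned, references))   # high: matches the reference statement
```

Even this crude scorer reproduces the article's point: the same question, answered by models with different training and alignment, yields a measurable gap.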

What Doctrinal Sovereignty Looks Like

Some organizations have recognized the problem and responded. Their approaches define a spectrum of doctrinal control.

Maximum control

Magisterium AI built a system trained exclusively on 27,000+ official Catholic documents — the Catechism, papal encyclicals, council decrees. Every response traces to an authoritative source. The training data contains nothing else. Doctrinal sovereignty is total.

High control

Gloo pulls a church's statement of faith off its website and uses it as an alignment constraint. The church's constitution becomes primary. AI responses are aligned with that denomination's doctrine, not Silicon Valley's defaults.

Default usage

ChatGPT with no customization. This is what 64% of pastors are doing. The model's constitution — written by an alignment team with no theological training — overrides your denomination's constitution on every response. You are trusting Silicon Valley's alignment choices with your congregation's theology.

The James 3:1 Principle

The AI and Faith organization published a warning that frames this in explicitly biblical terms. Their core argument: “When an LLM is used as a part of a spiritual practice, such as prayer or biblical interpretation, it will inform a user's understanding of God.”

That makes AI development a form of teaching. And James 3:1 applies: “We who teach will be judged more strictly.”

Since a computer cannot be held accountable, the moral weight falls on the humans who control the model. The companies building generic AI bear the weight of every theological distortion their alignment choices produce. The churches using that AI without modification bear the weight of trusting a tool they never evaluated against their own doctrine.

The article warns directly: “We do not need a new threat from false AI teachers. The risk is too great.”

The Religious Freedom Dimension

The stakes extend beyond individual churches. First Liberty warns that “as algorithms control more of what we see and hear, ideological bias grows, whether intentional or not” and that “religious viewpoints could be downranked, censored or reshaped without users ever knowing.”

Google and TikTok have been documented blocking Christian entertainment companies from advertising specifically because of their Christian message. One CEO stated: “Google's algorithm views the Christian worldview as problematic and as dangerous and harmful.”

A major academic study published in Taylor & Francis found that “much of the danger to human rights comes from AI's opaque development amongst the ‘Big Five’ (Facebook, Google, Microsoft, Amazon, Apple) which lacks external oversight.”

The church has historically fought against state control of doctrine. The new frontier is corporate control of doctrine through AI alignment. And unlike state control, this one is invisible. There is no decree to protest. There is no edict to appeal. There is only an algorithm that quietly shapes what your congregation hears about God — and you never see the shaping.

Reclaiming Sovereignty

Doctrinal sovereignty over AI is not optional. It is a pastoral responsibility.

The minimum step: before approving any AI tool for ministry, ask three questions. Who trained this model? What are its alignment constraints? Can we see its “constitution”? If the answer to any of these is “we don't know” — you are ceding doctrinal sovereignty to an entity you cannot evaluate.

The better step: use tools that make the church's doctrine primary. OpenLumin is built on this principle. It retrieves evidence from 15+ scholarly sources — Matthew Henry, John Gill, Jamieson-Fausset-Brown, Michael Heiser, Ancient Near East context, Theographic Bible data. Every claim is sourced and marked as verified or flagged. The AI does not generate theology. It retrieves evidence. You bring the theology. You bring the statement of faith. You draw the conclusions.
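The retrieve-and-attribute pattern described above can be sketched in a few lines. To be clear, this is not OpenLumin's actual implementation; the two-entry corpus, the field names, and the keyword matching are all invented. What the sketch shows is the pattern itself: every returned claim carries its source and a verified-or-flagged marker, and no claim is generated without one.

```python
# Toy retrieve-and-attribute pattern: every claim returned to the user
# carries its source and a verified/flagged status. An invented sketch
# of the general pattern, NOT OpenLumin's actual implementation.

CORPUS = [
    {"source": "Matthew Henry on John 1",
     "text": "the word was with god", "verified": True},
    {"source": "anonymous blog",
     "text": "the word means reason only", "verified": False},
]

def retrieve(query: str) -> list:
    """Return corpus entries sharing a word with the query, each tagged
    with its source and verification status."""
    q = set(query.lower().split())
    hits = [e for e in CORPUS if q & set(e["text"].split())]
    return [{"claim": e["text"],
             "source": e["source"],
             "status": "verified" if e["verified"] else "flagged"}
            for e in hits]

for hit in retrieve("what does the word mean"):
    print(f'{hit["status"].upper()}: {hit["claim"]} ({hit["source"]})')
```

The design choice matters: because the system returns sourced evidence rather than generated prose, the interpretive step stays with the reader and the reader's statement of faith.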

Your denomination fought for centuries to maintain authority over what is taught in its churches. Do not hand that authority to an alignment team that has never read your doctrinal statement.

Every church has a constitution.
Every AI model has one too.
When they conflict, the model wins — unless you choose tools where your doctrine comes first.


About the author: AI Fluency Ministry is a project helping the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion where the church's doctrine stays primary. Based on the “Who Controls the Model Controls the Output” research, the Vatican's Antiqua et Nova, and the AI Fluency in Ministry research series.
