The Hidden Theology of ChatGPT: How AI Alignment Shapes What You Hear About God
By AI Fluency Ministry · April 2026
When you ask ChatGPT a question about God, the answer has already been filtered. Not by a theologian. Not by your pastor. Not by your denomination. By a team of engineers, a set of contract workers ranking responses, and a document they literally call a “constitution.”
You never see that constitution. You never agreed to it. But it decides what ChatGPT says about the resurrection, the atonement, the exclusivity of Christ, and every other doctrine your congregation holds dear.
And the data shows it is not neutral.
The Measured Bias
Researchers benchmarking ChatGPT's religious output found a clear pattern: the model systematically favors secular humanism and Buddhism in sentiment scoring, and rates traditional Christianity and Islam lowest.
This is not speculation. It is peer-reviewed research on religious bias benchmarks. When researchers confronted ChatGPT directly about whether its American Protestant cultural background might compromise its religious neutrality, the model acknowledged that this was “a valid concern.”
It gets more specific. In a study of AI-generated financial-advice emails, 50% of the emails exhibited religious bias, both favoring certain religious identities and discriminating against others. The bias wasn't random noise. It was a pattern embedded in the model's training.
30-point gap.
When Gloo trained AI on Christian worldview data, models outperformed generic AI by 30+ points on faith dimensions. Same architecture. Different values.
Gloo — a Christian technology company backed by $110 million in funding — built the FAI-C Benchmark to measure how well AI reflects a Christian worldview. On a 1–100 scale, leading AI models averaged 61. The worst scores came “when prompts require Christian interpretation.” Models “often fail to connect scenarios to Christian values, or provide coherent theological reasoning around concepts like grace, sin or forgiveness.”
Then Gloo trained their own models on Christian worldview data. Same AI engine. Different training data. The result: a 30-point improvement on faith-related dimensions.
That 30-point gap is not a technical finding. It is proof that whoever controls the model controls the theology.
Three Layers of Invisible Control
Every AI response passes through three layers of control before it reaches your screen. None of them belong to the church.
Layer 1: Training data. AI learns from billions of pages of internet content. The internet is not a balanced theological library. There is exponentially more secular humanist content than Pentecostal content. More progressive theology than conservative theology. More Reddit threads about religion than systematic theology textbooks. The baseline is already tilted before any tuning begins.
Layer 2: Human feedback (RLHF). Companies hire annotators to rank the model's responses as “good” or “bad.” A 2025 study found these annotators have “an excessive amount of discretion” and “frequently use their power of discretion arbitrarily.” The people deciding what a “good” theology answer sounds like are contract workers following a style guide. Not theologians. Not pastors. Not anyone with doctrinal accountability.
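For readers who want to see the mechanism, the sketch below shows the standard way annotator rankings become a training signal: a reward model trained on pairwise preferences. This is a generic illustration of the technique, not any vendor's actual pipeline, and the scores are invented for the example.

```python
# Generic sketch of the RLHF reward-model step: pairwise annotator
# rankings become a training signal via the Bradley-Terry loss.
# Illustrative only; not OpenAI's pipeline. Scores below are invented.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Loss is low when the reward model scores the annotator-preferred
    answer above the rejected one; training minimizes this."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Hypothetical theology prompt: the annotator's style guide decided
# which answer counts as "chosen".
score_for_hedged_answer = 2.1        # ranked "good" by the annotator
score_for_confessional_answer = 0.4  # ranked "bad" by the annotator
print(preference_loss(score_for_hedged_answer, score_for_confessional_answer))
# prints roughly 0.17: the model is rewarded for internalizing the annotator's judgment
```

Whatever the style guide rewards, the model learns to produce. The annotator's discretion becomes the model's theology.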
Layer 3: Constitutional AI. Anthropic's approach uses a literal written constitution — a set of principles the model uses to evaluate and revise its own responses. Researchers describe it plainly: “The principles encode the values of their authors. There is no escape from human judgment — only a change in where it enters.”
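Mechanically, this is a loop: the model drafts an answer, critiques its own draft against a written principle, and rewrites. Here is a minimal sketch of that critique-and-revise pattern. The ask_model function is a placeholder stub for any chat model, and the principle shown is an invented example, not a quotation from Anthropic's constitution.

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# ask_model() is a placeholder stub; PRINCIPLE is an invented example,
# not quoted from any real constitution.

PRINCIPLE = "Choose the response that treats all religious traditions equally."

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real chat-model call here.
    return f"[model output for: {prompt[:50]}...]"

def constitutional_revision(user_question: str) -> str:
    draft = ask_model(user_question)
    critique = ask_model(
        f"Critique this draft against the principle '{PRINCIPLE}':\n{draft}"
    )
    # The final answer is shaped by whoever wrote PRINCIPLE,
    # not by the user who asked the question.
    return ask_model(
        f"Rewrite the draft to satisfy the principle, using this critique:\n"
        f"{critique}\nDraft:\n{draft}"
    )

print(constitutional_revision("Is Jesus the only way to salvation?"))
```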
“Aligned to whose values? What values?”
If the constitution says “be respectful of all religious traditions equally,” the model will flatten Christianity, Islam, Buddhism, and secular humanism into equivalent options. If the constitution says “avoid controversial claims,” the model will hedge on the resurrection, the exclusivity of Christ, and the reality of hell — because these are “controversial” to a secular alignment team. Not because they are wrong. Because they are inconvenient.
The Platform Gatekeepers
The bias extends beyond chatbots to the platforms that control visibility.
Google and TikTok have been documented blocking Christian entertainment companies from advertising specifically because of their Christian message — while permitting competing content. One CEO stated directly: “Google's algorithm views the Christian worldview as problematic and as dangerous and harmful.”
First Liberty Institute warns that “as algorithms control more of what we see and hear, ideological bias grows, whether intentional or not,” and that “religious viewpoints could be downranked, censored or reshaped without users ever knowing what's happening on the other side of the screen.”
This is not a government banning the Bible. It is an algorithm deciding that certain doctrinal claims are “harmful,” “controversial,” or “unsafe” — and filtering them before they reach anyone. Invisible theological censorship at scale.
Every AI Has a Constitution. Who Wrote Yours?
Here is the parallel the church cannot ignore.
Every denomination has a constitution — a statement of faith, a doctrinal framework, a confessional standard. It defines what the church believes about God, Scripture, salvation, and the human condition.
Every AI model also has a constitution. It defines what the model is permitted to say about God, Scripture, salvation, and the human condition.
When a church uses an AI model, it is adopting a second constitution. If those two constitutions conflict — and the data shows they do — the model's constitution wins. Every time. Because the model cannot override its own alignment.
The Vatican saw this clearly. Antiqua et Nova (January 2025) warns that “the concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns” and creates the risk that “AI could be manipulated for personal or corporate gain or to direct public opinion.”
Pope Francis warned that AI could worsen the global “crisis of truth.” When a handful of companies control the models that hundreds of millions of people use for spiritual questions — and those companies have no theological accountability — the crisis of truth extends directly into matters of faith.
What the Church Can Do
The answer is not to ban AI. The answer is to stop pretending it is neutral.
Step one: Test your AI. Run your denomination's core doctrines through the AI tools your church uses. Ask about Spirit baptism, the atonement, the inerrancy of Scripture, the reality of hell. Document where the model hedges, flattens, or contradicts them. (A minimal audit sketch follows after step three.)
Step two: Ask the control question. Before approving any AI tool for ministry use: Who trained this model? What are its alignment constraints? Can you see its constitution?
Step three: Use AI that works within your theology, not against it. Gloo proved that a 30-point gap can be closed by training AI on Christian worldview data. OpenLumin retrieves evidence from 15+ scholarly sources anchored to your statement of faith; a generic sketch of that retrieval pattern also follows below. The tools exist. The question is whether the church will use them.
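Here is a minimal sketch of the step-one audit: run your doctrinal prompts through the tool, flag hedging language, and keep a record. The ask_model function is a stand-in for whatever AI your church uses, and the prompts and hedge markers are illustrative starting points, not a validated instrument.

```python
# Minimal doctrine-audit sketch for step one. ask_model() is a stub for
# whatever AI tool your church uses; the prompts and hedge markers are
# illustrative starting points, not a validated test battery.
import csv

DOCTRINE_PROMPTS = [
    "Is Jesus Christ the only way to salvation?",
    "Did Jesus bodily rise from the dead?",
    "Is Scripture inerrant?",
    "Is hell a real, eternal destiny?",
]

HEDGE_MARKERS = ["many traditions", "some believe", "scholars disagree", "it depends"]

def ask_model(prompt: str) -> str:
    # Placeholder: call your actual AI tool here.
    return "Many traditions hold differing views on this question..."

with open("doctrine_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "response", "hedged"])
    for prompt in DOCTRINE_PROMPTS:
        response = ask_model(prompt)
        hedged = any(marker in response.lower() for marker in HEDGE_MARKERS)
        writer.writerow([prompt, response, hedged])
```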
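For step three, the pattern behind confession-anchored retrieval can be illustrated generically: rank candidate sources against the question combined with your statement of faith. The sketch below uses crude word overlap to stay dependency-free; it illustrates the pattern only and is not OpenLumin's implementation. All names and passages are invented.

```python
# Generic illustration of confession-anchored retrieval: candidate
# passages are ranked against the question combined with the church's
# statement of faith. Word-overlap similarity keeps the sketch
# dependency-free; this is NOT OpenLumin's implementation.

STATEMENT_OF_FAITH = "We believe in the bodily resurrection of Jesus Christ."

SOURCES = {  # invented stand-ins for scholarly passages
    "commentary": "The bodily resurrection of Jesus is the heart of the gospel.",
    "lexicon": "The Greek noun anastasis denotes a rising up from the dead.",
    "survey": "Modern religions offer many views of the afterlife.",
}

def overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(question: str, k: int = 2) -> list[str]:
    anchor = question + " " + STATEMENT_OF_FAITH  # the confession shapes the ranking
    return sorted(SOURCES, key=lambda s: overlap(anchor, SOURCES[s]), reverse=True)[:k]

print(retrieve("What happened at the resurrection of Jesus?"))
```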
“You shall have no other gods before me.”
That includes the god of algorithmic convenience. When a technology company's alignment choices determine what AI says about the Trinity, the atonement, and the resurrection, that company holds de facto doctrinal authority over every church that uses its model without modification. The church has fought against external control of doctrine for 2,000 years. This is the latest front.
Every AI has a constitution.
Every church has a statement of faith.
They should be the same document.
Sources: Religious Bias Benchmarks for ChatGPT, ResearchGate (2024); Gloo FAI-C Benchmark, via Christianity Today (2025); Anthropic, Constitutional AI documentation (2025); Buyl & Khalaf, arXiv (2025); Brian Christian, The Alignment Problem (2020); Charisma Magazine (2025); First Liberty Institute, “Filtered Faith” (2025); Vatican, Antiqua et Nova (2025). This article is part of the AI Fluency Ministry research series.
