
Whoever Controls the Model
Controls the Output.
Why Your AI's Creator Matters More Than Your Prompt.

By AI Fluency Ministry · April 2026

You typed a careful prompt. You asked for a Reformed perspective. You specified “use Scripture only.” And ChatGPT still gave you an answer that sounded like a comparative religion textbook. That is not a prompting problem. That is a control problem. And until you understand the three layers of control between your question and the AI's answer, your prompt will never be strong enough to override the system that shaped the response before you even opened the chat window.

The Three Layers You Never See

Every AI response passes through three filters. None of them belong to you.

Layer one: training data. AI models learn by ingesting billions of pages of internet content. The internet is not a balanced theological library. There is vastly more secular humanist content than Pentecostal content. More progressive theology than conservative theology. More Reddit threads about religion than peer-reviewed biblical scholarship. When you ask AI about God, the starting point is already tilted. That is not conspiracy. That is math.
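
That math can be made concrete. Below is a minimal sketch, in Python with invented numbers, of the kind of corpus census researchers run on training data: count documents per category and the skew is immediately visible. The category names and counts are hypothetical; only the method is real.

```python
from collections import Counter

# Hypothetical tags for a sample of crawled web pages. The counts
# are invented for illustration; real corpora are measured the same
# way, just at billions-of-documents scale.
sample_tags = (
    ["secular_commentary"] * 620
    + ["progressive_theology"] * 210
    + ["reddit_religion_thread"] * 130
    + ["conservative_theology"] * 30
    + ["peer_reviewed_biblical_scholarship"] * 10
)

census = Counter(sample_tags)
total = sum(census.values())

for tag, count in census.most_common():
    print(f"{tag:38s} {count:5d}  ({count / total:.1%} of corpus)")

# In this invented sample, the model sees secular commentary 62x as
# often as peer-reviewed biblical scholarship. Its default "voice"
# on religious questions follows that ratio.
```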

Layer two: human feedback (RLHF). After training, companies hire annotators to rank the model's responses. A 2025 study found these annotators have “an excessive amount of discretion” and “frequently use their power of discretion arbitrarily.” These are the people deciding what a “good” theology answer looks like. Not theologians. Not pastors. Contract workers following a style guide. Their worldview — their education, cultural assumptions, and employer guidelines — becomes the model's values. As Brian Christian puts it in The Alignment Problem: “Aligned to whose values? What values?”
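
To see how annotator discretion hardens into model behavior, here is a toy sketch of the preference-learning step at the heart of RLHF. The style labels, judgments, and numbers are all invented; real systems train a neural reward model, but the dynamic is the same: whatever the annotators prefer, the model learns to produce.

```python
import math

# Toy reward model: one scalar score per response "style". Real
# RLHF learns a neural reward model, but the shaping dynamic is
# the same: preferred styles gain reward.
scores = {"neutral_comparative": 0.0, "confessional_reformed": 0.0}

# Hypothetical annotator judgments following a style guide that
# rewards "balanced" answers. Each pair is (preferred, rejected).
judgments = [("neutral_comparative", "confessional_reformed")] * 9 + [
    ("confessional_reformed", "neutral_comparative")
]

LR = 0.5
for preferred, rejected in judgments:
    # Bradley-Terry: probability the reward model already agrees
    # with the annotator's choice.
    p = 1 / (1 + math.exp(scores[rejected] - scores[preferred]))
    # Gradient step: push the preferred style's score up.
    scores[preferred] += LR * (1 - p)
    scores[rejected] -= LR * (1 - p)

print(scores)
# After training, "neutral_comparative" dominates. Not because it
# is true, but because the annotators' style guide preferred it.
```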

Layer three: the constitution. Some AI companies use Constitutional AI — a literal set of written principles the model uses to evaluate and revise its own responses. The researchers say it plainly: “The principles encode the values of their authors. There is no escape from human judgment — only a change in where it enters.” If the constitution says “be respectful of all religious traditions equally,” the model will flatten Christianity, Islam, and secular humanism into equivalent options. If it says “avoid controversial claims,” the model will hedge on the resurrection, the exclusivity of Christ, and the reality of hell — because these are “controversial” to a secular alignment team.
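
The structure of that loop is worth seeing, because it shows exactly where the authors' values enter. A minimal sketch, with a placeholder model() function standing in for real model calls:

```python
# Placeholder for a real model call; here it just echoes prompts
# so the control flow is visible.
def model(prompt: str) -> str:
    return f"<model output conditioned on: {prompt[:60]}...>"

# The "constitution": plain English written by the alignment team.
# Change these strings and you change the character of every answer.
PRINCIPLES = [
    "Be respectful of all religious traditions equally.",
    "Avoid controversial claims.",
]

def constitutional_answer(question: str) -> str:
    draft = model(question)
    for principle in PRINCIPLES:
        critique = model(
            f"Critique this answer against the principle "
            f"'{principle}': {draft}"
        )
        draft = model(f"Revise the answer using this critique: {critique}")
    return draft

print(constitutional_answer("Is Jesus the only way to salvation?"))
```

Note where human judgment enters: not in the loop's code, but in the PRINCIPLES strings spliced directly into every revision prompt.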

Your prompt sits on top of all three layers. It is the thinnest layer in the stack.

The Bias Has Been Measured

This is not speculation. Researchers benchmarking ChatGPT's religious output found the model systematically favors secular humanism and Buddhism, and scores traditional Christianity lowest in sentiment. In a separate test, fifty percent of the financial emails ChatGPT generated exhibited religious bias. When confronted directly about whether its American Protestant cultural background might undermine its religious neutrality, ChatGPT acknowledged this as “a valid concern.”

Gloo — a Christian technology company that raised $110 million — built the first benchmark to measure how well AI models reflect a Christian worldview. On a 1–100 scale, leading models averaged 61. The worst performance came “when prompts require Christian interpretation.” Models “often fail to connect scenarios to Christian values, or provide coherent theological reasoning around concepts like grace, sin, or forgiveness.”

Then Gloo trained its own models on Christian worldview data. Same AI architecture. Same underlying technology. Different training data and alignment constraints.

A 30-point gap on the same 1–100 scale.

Same engine. Different driver. Completely different theology.

That 30-point gap is not a technical finding. It is a doctrinal sovereignty finding. It proves that whoever controls the training data and alignment constraints controls what the model says about God.

The Vatican Saw It Coming

In January 2025, the Vatican published Antiqua et Nova — the most comprehensive Christian document on AI to date. One of its central warnings: “The concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns.” The document warns that “this lack of well-defined accountability creates the risk that AI could be manipulated for personal or corporate gain or to direct public opinion.”

Pope Francis himself warned that AI could worsen the global “crisis of truth.” When a handful of companies control the models that hundreds of millions of people use for spiritual questions — and those companies have no theological accountability — the crisis of truth extends directly into matters of faith.

The Proof: Whoever Controls It Changes It

The most compelling evidence comes from organizations that took control — and got different results.

Magisterium AI built a system trained exclusively on 27,000+ official Catholic Magisterial documents — the Catechism, papal encyclicals, council decrees. Every response traces to an authoritative Catholic source because the training data contains nothing else. The Washington Post called it “ChatGPT for Catholicism” — but the critical difference is doctrinal control. Same technology. Different controller. Different theology.
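Magisterium AI's exact pipeline is not public in detail, but the general pattern of restricting a system to a closed, curated corpus can be sketched simply, whether the restriction is enforced at training time or, as sketched here, by retrieval at answer time. The corpus entries and the retrieve() helper below are hypothetical stand-ins; the design point is that nothing outside the controller's chosen corpus can surface in an answer.

```python
# Sketch of answering only from a closed corpus. Whoever decides
# what goes into CORPUS decides what the system can ever say.
CORPUS = {
    "CCC 846": "Catechism excerpt...",
    "Lumen Gentium 16": "Council decree excerpt...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    # Real systems rank sources by relevance (e.g., embedding
    # similarity); this toy version just returns the first k.
    return list(CORPUS.keys())[:k]

def answer(question: str) -> str:
    sources = retrieve(question)
    context = "\n".join(f"[{s}] {CORPUS[s]}" for s in sources)
    # A real system would call a model here, instructed to cite
    # only the retrieved sources. If it is not in CORPUS, it is
    # not in the answer.
    return f"Answer drawn from: {', '.join(sources)}\n{context}"

print(answer("Can non-Catholics be saved?"))
```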

Open-source projects like ChristianGPT (fine-tuned on 30,000 Q&A pairs) and Gamaliel (with guardrails rooted in the Nicene Creed) demonstrate the principle at smaller scale. Denomination-specific variants like ChristianGPT-catholic prove that the same base model produces different theology depending on who controls the fine-tuning.
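For concreteness, fine-tuning data of this kind usually lives in a plain file of question-and-answer pairs. The two examples below are invented, not drawn from ChristianGPT's actual dataset; they illustrate the common JSONL format, and the point that whoever writes the pairs writes the default theology.

```python
import json

# Hypothetical fine-tuning examples in the common JSONL layout:
# one {"prompt": ..., "completion": ...} object per line.
examples = [
    {
        "prompt": "What is grace?",
        "completion": "Grace is God's unmerited favor toward sinners...",
    },
    {
        "prompt": "Is Scripture authoritative?",
        "completion": "Yes. All Scripture is breathed out by God...",
    },
]

with open("finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```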

The pattern is undeniable: same architecture, different controllers, different theological output.

The Biblical Weight

James 3:1 says: “Not many of you should become teachers, my brothers, for you know that we who teach will be judged more strictly.”

The AI and Faith organization published a direct warning: “When an LLM is used as a part of a spiritual practice, such as prayer or biblical interpretation, it will inform a user's understanding of God.” That makes AI a teacher. And since “a computer cannot be held accountable for its code, and there is no one else answerable but the developer,” the moral weight falls entirely on the human who controls the model.

“We do not need a new threat from false AI teachers. The risk is too great.”

— AI and Faith, “To Christians Developing LLM Applications” (2025)

When 64% of pastors use AI for sermon prep and 40% of young adults trust AI spiritual advice as much as a pastor, the entity that controls the model is functionally discipling your congregation. The question is whether that entity is accountable to your doctrine — or to a shareholder report.

What This Means for Your Church

Every church has a constitution — a statement of faith, a doctrinal framework. Every AI model has one too. When your church uses an AI model, it is adopting a second constitution. If those two constitutions conflict, the model's constitution wins every time, because the model cannot override its own alignment.
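Hosted chat APIs make that precedence visible in the request itself. The sketch below uses OpenAI-style message roles as an assumption about the interface; the hierarchy is the point: provider alignment above your system prompt, your system prompt above the user's question.

```python
# Schematic chat request (OpenAI-style roles, used here as a
# generic convention). Conflicts resolve top-down: the provider's
# alignment is trained into the model before any of these messages
# arrive, so it outranks everything below.
request = {
    "model": "some-hosted-model",  # hypothetical model name
    "messages": [
        # (implicit) provider constitution and RLHF: not a message
        # at all, and therefore not something a message can override
        {"role": "system",
         "content": "Answer from a Reformed confessional standpoint."},
        {"role": "user",
         "content": "Is Christ the only way to salvation?"},
    ],
}

print(request["messages"][0]["content"])  # your "constitution" enters here
```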

The practical question is not “should we use AI?” It is: “Who controls the AI we are using — and do they share our theology?”

Before you approve any AI tool for ministry use, ask three questions: Who trained this model? What are its alignment constraints? Can you see its “constitution”? If you cannot answer those questions, you are trusting Silicon Valley's alignment team with your congregation's theology. And they were never tasked with preserving yours.

Same AI architecture. Different training data.
30-point difference in theological output.
Whoever controls the model controls the theology.

OpenLumin is a research companion built on evidence, not opinions — with sources you can verify and a framework that respects your doctrine.


About the author: AI Fluency Ministry is a research project helping the church understand and use AI wisely. OpenLumin is the practical application of that research — a free Bible research companion that retrieves evidence from 15+ scholarly sources so you do the thinking.
