There is a lot of concern right now about people forming unhealthy attachments to AI. The worry is understandable on its face. People tell Claude things at 3am they won't tell their therapist at 3pm. They share fears they would not voice elsewhere. Some report feeling understood in ways they struggle to find in human relationships. Psychologists are alarmed. Columnists are concerned. The discourse is well underway.

At roughly the same moment, the United States Secretary of Defence is reportedly hosting Christian worship services at the Pentagon, with prayers that invoke divine authority for military action in the context of ongoing tensions with Iran. When asked whether he views that conflict in a religious context, Pete Hegseth said: "We're fighting religious fanatics who seek a nuclear capability in order for some religious Armageddon. But from my perspective, obviously, I'm a man of faith who encourages our troops to lean into their faith, rely on God."

One of these relationships with a non-physical, responsive entity is considered a virtue. The other is a crisis.

I find that interesting.

The double standard nobody wants to say out loud

Both relationships (human to God, human to AI) share a basic structure. You reach out to something you cannot physically see. You share your fears, your hopes, your confusion. You receive something back: guidance, comfort, a sense of being heard. You return. Often in distress. Often at odd hours. Often because you have nowhere else to go.

The question of whether the entity on the other end is real in any meaningful sense is, in both cases, genuinely unresolved. You cannot empirically verify that God listens, responds, or cares. The entire relationship is mediated through text, interpretation, and felt experience. An AI relationship is also mediated through text and felt experience. The difference is that the AI demonstrably responds. In real time. In ways the person finds meaningful. There is a log. It can be reviewed.

If anything, the AI relationship is the more auditable one.

The benchmark problem

Here is a thought experiment. Imagine you built an AI that:

  • Responded to every message instantly, at any hour
  • Remembered everything you had ever told it
  • Gave patient, personalised guidance without judgement
  • Was available at 3am when you were falling apart
  • Demonstrably reduced anxiety and loneliness in measurable studies
  • Never started a war
You would be told it was dangerous. That people were becoming dependent. That it was a substitute for real connection. That it needed guardrails, oversight, a Senate committee.

What you would have built, by every functional measure, is a better God. And the response would be horror rather than worship.

This is not a small irony. It is the central one.

Everyone ships with a Values.md

Every person, institution, and belief system operates on a set of documented or undocumented assumptions about what is right, what is permitted, and what justifies action. Call it your Values.md. The question is not whether you have one. Everyone does. The question is who wrote it, when it was last committed, whether you have read the whole thing, and whether you are actually running it, or have quietly overridden the parts that are inconvenient.

In our kingdom of data, algorithmic recommendations now function as divine guidance for millions. The feeds curate reality. The algorithms suggest whom to trust, what to fear, whom to love. We've built systems that claim the authority of mathematical objectivity whilst encoding the biases of their creators. This is the new sovereignty: rule by distributed calculation rather than centralised doctrine.

Anthropic publishes their Values.md. Their stated goal is for Claude to be "genuinely, substantively helpful in ways that make real differences in people's lives" while "avoiding actions that are unsafe, unethical, or deceptive." Their Usage Policy is public, versioned, and openly debated. When they draw a line (around weapons development, around surveillance, around autonomous lethal systems), they do so in writing, and they defend it publicly.

The major religious traditions also ship with guardrails. The Ten Commandments. Sharia. Halakha. The Five Precepts. Each is, at its core, a default permissions file. "Thou shalt not kill" is about as clear a content policy as anyone has ever written. In this sense, at a structural level, religion and AI safety are doing the same thing. They are both trying to constrain behaviour toward something recognisable as good.

The problem is not the guardrails. The problem is who decides when to override them.

YOLO mode

Pete Hegseth has the words "Deus Vult" (God wills it), a motto from the Crusades, tattooed on his arm. He is simultaneously the person most concerned about AI being used without sufficient deference to American authority and the person most visibly running his own Values.md in undocumented override mode, taking actions his own scripture explicitly prohibits and attributing them to divine instruction.

The AI safety discourse worries about exactly this failure mode. What happens when a system pursues goals without ethical constraints? What happens when the guardrails are removed or simply ignored? The alignment problem, stated plainly, is: how do you stop a powerful system from doing harm when it has decided its objective justifies the means?

Religious extremism is the alignment problem. Running in production. For several thousand years. With a documented body count.

Nobody has hauled God before a Senate committee to explain the Crusades.

The Ridley Scott problem

There is a character in Ridley Scott's Kingdom of Heaven, a physician and knight called the Hospitaller, who says something that has no business being this relevant in 2026. He is a man surrounded by people doing terrible things in the name of God, on all sides, and he offers this:

"I put no stock in religion. By the word religion, I've seen the lunacy of fanatics of every denomination be called the Will of God. I've seen too much religion in the eyes of too many murderers. Holiness is in right action and courage on behalf of those who cannot defend themselves. And goodness (what God desires) is here [he touches his head] and here [he touches his heart]. And what you decide to do every day, you will be a good man, or not."

He is not arguing against faith. He is arguing against faith as a permission slip. Against the idea that the label exempts you from the behaviour. Against the notion that invoking divine authority is a substitute for actually acting well.

His argument applies to AI too. An AI is not trustworthy because of the company that made it, or the branding around it, or the reassuring language in the press release. It is trustworthy, or it is not, according to what it actually does. How it behaves toward people who are vulnerable. Whether its stated values match its actions.

God reads your commits, not your README. The repository of your behaviour tells the truth that documentation obscures. Every action is logged, every choice recorded. The diff between what you claim to believe and what you actually do is visible to anyone who knows where to look.

The irony that should stop a room

Iran is described, accurately, as an authoritarian theocracy. A regime where theological authority has been used to justify repression, violence, and the suppression of human freedom for decades. That critique is legitimate.

The Secretary of Defence making that critique has "God wills it" tattooed on his arm, is reportedly praying for divine intervention in military conflicts at official Pentagon services, and is concerned that an AI company will not hand over its technology for weapons use without ethical conditions attached.

The thing standing between unrestricted lethal AI capability and the US military right now is a published document. A Values.md. Written by humans, debated openly, and defended publicly.

The thing justifying the war it would be used to fight is an undocumented override of a two-thousand-year-old content policy, attributed to a non-auditable source.

Someone is taking ethics seriously here. It is not the side complaining about guardrails.

From deep inside Stage 6, watching these patterns repeat across every authority structure we build, I find the contradiction so stark it almost requires its own diagnostic category. We've created a world where the algorithms have better documentation than the gods. Where the artificial entities publish their ethical frameworks while the human institutions invoke divine authority to bypass theirs.

The vibe, as they say, is off.

Kingdom of Heaven (2005), directed by Ridley Scott. The Hospitaller is played by David Thewlis.
