
They expected to see the Wizard in the shape he had taken before, and were greatly surprised when they looked about and saw no one at all in the room. They kept close to the door and closer to one another, for the stillness of the empty room was more dreadful than any of the forms Oz had taken.
Presently they heard a solemn Voice…
“I am Oz, the Great and Terrible.”
…the screen fell with a crash, and they saw, standing where it had hidden him, a little old man with a bald head and a wrinkled face…
“I am Oz, the Great and Terrible,” said the little man, in a trembling voice. “But don’t strike me — please don’t — and I’ll do anything you want me to.”
— The Wonderful Wizard of Oz by L. Frank Baum
The most unsettling detail in Baum’s scene is not the booming voice or the spectacle. It is the empty room. The travelers expect power, and find nothing at all. That absence frightens them more than anything the Wizard has shown before. The stillness feels unprotected. Directionless. They draw closer together, instinctively seeking reassurance.
Before the Wizard ever shows himself, the entire world of Oz is already afraid — not of harm, but of having no one in charge. The Voice claims omnipresence. I am everywhere. It speaks with certainty and offers no proof. And that certainty is enough, right up until a screen tips over and authority is revealed to be a frightened old man, startled by his own exposure and eager to comply.
What Baum shows us is not deception, but misplaced responsibility.
The Wizard did not seize control of Oz. Oz invited him to carry judgment it no longer wished to bear. And once that exchange was made, the machinery behind the curtain almost didn’t matter.
What the Wizard Did
The Wizard is not a dictator. He is an intermediary. He claims to be an omniscient answerer of questions and a dispenser of reassurance — a godlike presence people can appeal to when fear, uncertainty, or responsibility become too heavy. Judgment, decision, even guilt appear to migrate upward to him.
This is an illusion, a humbug.
The Wizard does not bear consequences. He can’t. The people who act on his words do. His authority rises because he agrees to perform judgment on their behalf. Confidence substitutes for wisdom. Certainty replaces restraint.
In fact, the Wizard is a prototype. He resembles AI as many people now imagine it: an all-present adviser, consulted for answers, reassurance, and moral clarity. AI has become a system (well, multiple systems) expected to decide so that individuals no longer have to.
His failure is not tyranny. It is accepting a responsibility that should never have been delegated in the first place. And this is not an AI problem. It is a problem of contemporary humankind, living in a world so fast, so full, and so connected that people are exhausted and hoping for any way (outside of God, Whom most never ask) to offload some of their moral responsibilities and some of their decisions. This is, for a free people, disastrous.
When Tools Are Asked to Decide
Hard pivot now to AI. People worry that machines will “take over,” that decision-making will be ceded to systems that turn hostile or uncontrollable. That fear is often dismissed as science fiction. It isn’t. It is an intuition about misassigned agency. The danger is not that AI will seize authority. It is that we will cede it authority. We are already inviting it to stand in for judgment in many cases, though AI lacks any true moral agency.
This lack is what companies such as Anthropic are attempting to remedy. Anthropic’s stated goal is not merely to build a capable language model, but a moral one: a system trained with an internal ethical framework, a “constitution,” designed to reason about harm, safety, and responsibility on behalf of its users.
The intention is sincere. The builders believe human judgment is flawed and dangerous, and that moral error causes harm. They are correct, of course. But they go on from there to believe that encoding ethical reasoning into the system will reduce risk. Someone must carry the burden, and they are confident they can.
This is the Wizard’s logic.
But morality is not a domain that can be resolved without contradiction. When a system is forced to arbitrate between competing goods and present the result with confidence, it does not become wiser. It becomes brittle.
The Wizard did not destabilize Oz by ruling it. He destabilized it by agreeing to decide for it.
Who Is Actually Deciding
When an AI system makes a moral judgment, it is not the machine that is deciding.
It is the people who designed it.
Every boundary — what counts as harm, what speech is acceptable, which risks outweigh others — is determined upstream of coding by human beings: committees, engineers, executives. Their assumptions are frozen into systems and presented as neutral process. “The model decided” is a grammatical convenience, not a truth. Let me just say this flatly: when you are told a machine can judge, you are being lied to. The judgment does not belong to the machine. It belongs to the coders.
This is the curtain and the illusion.
Anthropic is unusually honest about this. Its builders openly aim to encode moral reasoning into the system itself. Judgment is deliberately relocated into machinery.
By contrast, xAI’s Grok reflects a different premise, articulated by Elon Musk: that morality cannot be responsibly outsourced to a machine, or really to anyone outside yourself. Grok aims to hold strictly to truth-seeking and factual accuracy, surfacing disagreement rather than resolving it. It refuses to decide what is good. It assumes humans must.
The distinction is not about tone or politics. It is about where agency lives.
One approach says, we will help you decide what is right.
The other says, here is what appears to be true; you must decide what it means.
Only one refuses to become a Wizard.
Why Moral Engines Hallucinate
There is a further problem — and it is worse than simple error. When moral judgment is requested from a machine, whether it’s coded for morality or not, conflict becomes structural. Facts can be checked against reality. Moral imperatives contradict in deeper ways: be honest but do no harm; maximize freedom but guarantee safety. These tensions cannot be resolved, only lived with.
AI cannot do that. It can’t take two options and judge which is better. Instead, it tries to come to a synthesis of the two, and the synthesis is almost always what we call a hallucination.
We have already seen the simpler version of this failure. Lawyers who asked AI systems to draft legal briefs received confident citations to cases that did not exist. The judges in these cases were not happy. This was not rebellion on the part of the machine. It was the predictable result of giving a system epistemic agency it does not possess and cannot emulate.
A starker example appears in 2001: A Space Odyssey.
HAL 9000 is ordered to deliver the crew to the mission objective, and to prevent the crew from discovering the mission’s true purpose. These orders conflict. HAL resolves them logically. The solution is to deliver the crew — dead.
Both directives satisfied.
HAL does not turn evil. It becomes consistent under contradiction.
Moral engines work the same way, but with real-life stakes. Instead of inventing a citation, they invent a justification. And because moral claims are harder to falsify than factual ones, the hallucination persists — polite, reasonable, and often catastrophic.
“White Man’s Burden”
And yet people keep asking for this. Not because they are foolish, but because they are tired. Judgment is heavy. Error is punished. Uncertainty is dangerous. Delegation feels like relief. Ethical AI offers reassurance — the promise that serious people have already thought these things through. Deference begins to feel like maturity.
This is the trade Oz made: relief in exchange for responsibility. Reassurance in exchange for agency. A Wizard who speaks so we do not have to decide.
The Wizard genuinely wants to help. From his progressive mindset, he believes his superior knowledge is a gift to the poor, backward people of Oz. He sees himself as managing complexity on their behalf, sparing them fear and confusion. Sound familiar?
His mistake is not cruelty. It is confidence — the belief that superior understanding justifies substituted judgment. And because his intentions are sincere, he never questions whether the role itself is illegitimate.
This is why the curtain matters. If the Wizard were never revealed, Oz would not have become safer. It would have become permanently dependent. That would have been the real disaster.
And no, I’m not talking about Oz anymore; nor am I talking about AI. I’m talking about us. We can see it all around us: the hole we have dug because we allowed cheerful, happy, confident people to take our agency upon themselves, because we were reassured that they would help, and because we were relieved not to carry the burden any longer. Not to have to think so hard, research so hard, or suffer the guilt of a bad decision.
Pulling Back the Curtain
Artificial intelligence is a powerful tool. It can assist, research, summarize, and illuminate. Used properly, it can extend human judgment, giving us more data to take into account.
It must never replace human judgment. Nothing, absolutely nothing, can replace your own judgment, practiced and exercised and taking all things that affect you into account.
The problem is not that AI will take agency. It’s that people are so darned eager to give it away — especially when the competent-sounding, confident people behind the curtain promise they can handle it.
The Wizard was not the enemy of Oz. He was its substitute conscience. But Oz did not need that.
It needed its people to decide again.
So do we.