If you’ve ever stepped onto an escalator that wasn’t moving, you already understand the core failure mode of modern AI systems.
You can see it’s stationary.
You know it’s stationary.
And yet your body still leans forward, preparing for motion that never comes.
That moment—brief, automatic, and surprising—is not a lapse in judgment. It’s a predictable outcome of a system optimized for speed over verification.
The same pattern shows up, in a different domain, when large language models produce fluent but incorrect answers. The escalator analogy becomes especially useful once we frame the problem in terms of deterministic lineage and grounded truth.
The broken escalator phenomenon happens because your motor system has a learned lineage: thousands of prior rides in which stepping on was followed by motion, and motion was automatically compensated for.
That lineage is deterministic. Given the same prior exposures, the same response occurs.
What fails is not prediction but grounding. The motor action is no longer anchored to the current physical state of the world. The system executes a historically valid response in a context where it no longer applies.
Crucially, the brain does not pause to ask:
“Is this prediction still grounded in reality?”
It assumes that the lineage is sufficient.
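To make the pattern concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the outcome labels, the action map, and the policy function are hypothetical stand-ins for the motor system, not a model of it.

```python
# Purely illustrative sketch (all names hypothetical): a policy that is a
# deterministic function of past experience and never consults the present.

from collections import Counter

# Map the most common historical outcome to an automatic action.
ACTION_FOR = {
    "escalator_moved": "lean_forward",
    "escalator_stopped": "step_normally",
}

def learned_policy(past_outcomes: list[str]) -> str:
    """Return the action tied to the most frequent past outcome."""
    most_common_outcome = Counter(past_outcomes).most_common(1)[0][0]
    return ACTION_FOR[most_common_outcome]

# A lifetime of moving escalators, one broken one today:
history = ["escalator_moved"] * 999 + ["escalator_stopped"]

# No sensor reading, no "is it moving right now?" check. Same history in,
# same action out -- deterministic, and wrong whenever reality has changed.
print(learned_policy(history))  # lean_forward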
Large language models behave in an analogous way.
They do not “know” facts in the sense of storing or retrieving grounded truth. Instead, they operate over probabilistic lineages learned during training.
Those lineages encode patterns such as:
- which tokens tend to follow which contexts
- how questions of a given form are typically answered
- what a fluent, confident explanation sounds like
From the model’s perspective, generating a response is not an act of verification or checking against reality. It is an act of probabilistic continuation along a learned trajectory.
When an answer is hallucinated, what is happening is not imagination or creativity. It is the execution of a learned probabilistic lineage without a grounding anchor.
The model steps forward because, historically, the escalator moved.
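A toy sampler makes this concrete. The bigram table and probabilities below are invented for illustration; a real model has billions of parameters, but the structural point is the same: generation is continuation, and nothing in the loop consults the world.

```python
import random

# Invented bigram probabilities -- a stand-in for learned weights.
BIGRAMS = {
    "the":       {"escalator": 0.9, "stairs": 0.1},
    "escalator": {"moves": 0.95, "stops": 0.05},
    "stairs":    {"climb": 1.0},
}

def continue_trajectory(start: str, steps: int = 2, seed: int = 0) -> list[str]:
    """Probabilistic continuation along the learned trajectory.

    Note the absence of any verification step: likelihood, not truth,
    drives every token.
    """
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(steps):
        options = BIGRAMS.get(tokens[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

# Most likely output: "the escalator moves" -- because it usually did in
# training, not because anyone checked whether it is moving now.
print(" ".join(continue_trajectory("the")))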
In traditional deterministic systems, lineage is usually tied to explicit state transitions:
- inputs are recorded
- each operation maps a known state to a known next state
- every output is logged against the operations that produced it
Truth emerges because every output can be traced back through a grounded causal chain.
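For contrast, here is a sketch of what traceable lineage looks like in a conventional deterministic system. The ledger structure is hypothetical, but the property it demonstrates is the one described above: every output carries the causal chain that produced it.

```python
from dataclasses import dataclass, field

@dataclass
class TracedValue:
    value: float
    lineage: list = field(default_factory=list)  # explicit state transitions

def apply(step: str, fn, tv: TracedValue) -> TracedValue:
    """Run one transition and record it in the lineage."""
    result = fn(tv.value)
    return TracedValue(result, tv.lineage + [f"{step}: {tv.value} -> {result}"])

balance = TracedValue(100.0, ["opening_balance: 100.0"])
balance = apply("deposit_50", lambda v: v + 50, balance)
balance = apply("fee_2_percent", lambda v: v * 0.98, balance)

# Every output can be audited back to its inputs:
for entry in balance.lineage:
    print(entry)
# opening_balance: 100.0
# deposit_50: 100.0 -> 150.0
# fee_2_percent: 150.0 -> 147.0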
Language models break this assumption.
Their lineage is:
- statistical rather than causal
- distributional rather than referential
- encoded in weights, not in auditable state transitions
You can often trace why a token was likely, but not what external fact it corresponds to. The lineage explains plausibility, not truth.
This is the key distinction:
Determinism without grounding does not produce truth—it produces consistency.
The broken escalator stumble is consistent. It’s just wrong.
Humans recover from the broken escalator effect almost instantly. Why?
Because sensory feedback re-anchors the system:
- your vestibular system reports no acceleration
- your eyes confirm that the steps are not moving
- your very next step corrects course
The lineage is overridden by reality.
Language models, by default, lack that anchor.
Without retrieval, sensors, tools, or verification mechanisms, the model has no external signal strong enough to interrupt its learned trajectory. It cannot “feel” that the escalator isn’t moving.
So the system continues forward, smoothly and confidently.
Once you view hallucinations through this lens, the central point becomes obvious: they are deterministic failures given insufficient grounding.
The system is doing exactly what its lineage tells it to do.
Telling a model to “be truthful” is like telling your legs:
“Only step forward if the escalator is definitely moving.”
That would require:
- a real-time reading of the world's current state
- a verification step before the action executes
- a mechanism that lets reality override the learned trajectory
Which is precisely what we add when we care about grounded truth.
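In code, the difference is a single guard. A minimal sketch: the sensor function below is hypothetical, standing in for whatever real measurement, retrieval call, or validation step the system actually has.

```python
def escalator_is_moving() -> bool:
    """Hypothetical stand-in for a real-time reading of current state."""
    return False  # today, the escalator is broken

def grounded_step(learned_action: str = "lean_forward") -> str:
    """Verification before action: reality can override the lineage."""
    if escalator_is_moving():
        return learned_action   # the learned trajectory still applies
    return "step_normally"      # the anchor interrupts the trajectory

print(grounded_step())  # step_normally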
Truth is not a personality trait. It’s a property of the system architecture.
Deterministic Lineage + Grounding = Reliable Systems
The engineering lesson is straightforward: keep the deterministic lineage, and add the grounding it lacks.
For AI systems, grounding typically means:
- retrieval from trusted, current sources
- tool calls that read the actual state of the world
- verification of outputs against authoritative data before they are acted on
These are the equivalent of handrails, signage, and visual cues on the escalator.
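As one concrete instance of the pattern, here is a minimal retrieval-gated answering sketch. The corpus, lookup, and abstention message are all invented for illustration; a production system would use a vector store, reranking, and calibrated verification, but the shape is the same: no anchor, no answer.

```python
# Invented, in-memory stand-in for a trusted external source.
TRUSTED_CORPUS = {
    "capital of france": "Paris",
}

def grounded_answer(question: str) -> str:
    key = question.lower().strip(" ?")
    evidence = TRUSTED_CORPUS.get(key)
    if evidence is None:
        # No grounding anchor: interrupt the trajectory instead of continuing it.
        return "I can't verify that; no supporting evidence was retrieved."
    return f"{evidence} (grounded in corpus entry '{key}')"

print(grounded_answer("Capital of France?"))    # Paris (grounded in ...)
print(grounded_answer("Capital of Atlantis?"))  # I can't verify that; ...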
We don’t expect humans to reason their way out of motor after‑effects. We design environments that prevent them.
So instead of saying:
“The model hallucinated.”
A more accurate statement is:
“The model followed a deterministic lineage without a grounded truth anchor.”
That framing shifts the conversation from model psychology to system architecture.
Which is exactly where it belongs.
The broken escalator isn’t evidence that prediction is bad. Prediction is essential.
It’s evidence that prediction without grounding is brittle.
As we build AI systems that increasingly influence real decisions, the challenge isn’t to eliminate prediction. It’s to ensure that every predictive step has something solid underneath it.
Otherwise, the system will keep leaning forward—confidently, deterministically, and sometimes disastrously—on escalators that no longer move.
Next - When Probabilistic Systems (LLMs) Pretend to Be Deterministic: A Lineage Case Study – Link