
Determinism, Probability, and the Cost of Getting This Wrong

Started ‎04-02-2026 · Modified ‎04-02-2026

The industry is moving quickly to embed large language models into systems that explain, automate, and increasingly act. Terms like agentic, autonomous, and understanding are now used casually — often to describe systems that generate fluent answers and run without human intervention.

 

This series exists because that casual language is masking a serious structural problem.

 

We are not just deploying new tools.


We are quietly redefining authority — and doing so on foundations that were never designed to carry it.

 

The Core Problem This Series Addresses

 

Large language models are probabilistic systems.


They are optimised to produce plausible continuations, not provable truth.

That is not a criticism — it is a design fact.
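That design fact can be made concrete with a toy sketch. The vocabulary and probabilities below are invented for illustration; the point is only that sampling from a next-token distribution is legitimately non-deterministic across runs, even when every run is "correct" by the model's own objective.

```python
import random

def sample_continuation(probs, seed):
    """Sample one token from a next-token probability distribution.
    The vocabulary and weights here are invented for illustration."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# A toy next-token distribution after the prompt "The dependency list is":
next_token = {"complete": 0.45, "missing": 0.30, "pending": 0.25}

# Different sampling runs can legitimately produce different continuations:
runs = {sample_continuation(next_token, seed) for seed in range(10)}
print(sorted(runs))  # more than one distinct continuation across seeds
```

Every one of those continuations is a plausible sentence; none of them is a provable claim about the system being described. That gap is the subject of this series.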

 

The problem arises when those systems are:

  • treated as systems of record,
  • wrapped in automation and labelled “agentic”,
  • or used to drive decisions in environments that require completeness, traceability, and auditability.

At that point, probability is no longer assisting intelligence — it is replacing authority.

And that is where things begin to break.

 

Why This Matters: Real‑World Impact

 

  1. Integration and Transformation Projects

Integration projects fail not because systems are complex, but because dependencies are misunderstood or missed.

When probabilistic outputs are treated as complete:

  • upstream and downstream impacts are under‑scoped,
  • conditional logic is silently ignored,
  • reuse across jobs, environments, and time is missed.

The most dangerous failures are not wrong answers — they are missing answers that look confident.

This is why integrations collapse late, during cutover or refactor, when it is most expensive to recover.
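The alternative is a deterministic walk that surfaces what it cannot resolve as an explicit gap rather than silently dropping it. A minimal sketch, with invented job names, of that contrast:

```python
# Hypothetical job registry; names are invented for illustration.
# "build_marts" declares a dependency on "load_orders", which is not defined.
jobs = {
    "load_customers": ["raw.customers"],
    "build_marts":    ["load_customers", "load_orders"],
}

def deterministic_impact(job, jobs):
    """Walk declared dependencies and report anything unresolvable
    as an explicit gap, instead of omitting it from the answer."""
    resolved, gaps = [], []
    for dep in jobs.get(job, []):
        if dep in jobs or dep.startswith("raw."):
            resolved.append(dep)
        else:
            gaps.append(dep)
    return resolved, gaps

resolved, gaps = deterministic_impact("build_marts", jobs)
print(resolved)  # ['load_customers']
print(gaps)      # ['load_orders'] -- the gap is surfaced, not hidden
```

A probabilistic summariser asked the same question can return only `resolved` and still read as complete. The deterministic version is obliged to say what it does not know.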

 

  2. False “Agentic” Implementations

Much of what is currently described as agentic is simply:

  • an LLM producing plausible output,
  • embedded in a workflow,
  • running without a human in the loop.

Automation is not agency.

An agent is allowed to act.
Acting requires authority.
Authority requires truth, not probability.

When probabilistic systems are given implicit authority, organisations end up with:

  • actions based on incomplete understanding,
  • systems that appear autonomous but rely on silent human correction,
  • confidence that outpaces correctness.

This series challenges the idea that chaining, prompting, or orchestrating LLMs turns them into agents. It does not.

 

  3. Regulatory and Governance Risk

In regulated environments, plausible is not good enough.

Regulation demands:

  • deterministic lineage,
  • reproducible outputs,
  • explicit handling of uncertainty,
  • and the ability to prove how an answer was derived.
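One way to make the reproducibility demand concrete is a gate that refuses to treat an output as authoritative unless two runs over the same input are byte-identical. The structures and names below are invented for illustration:

```python
import hashlib
import json

def fingerprint(output):
    """Canonical hash of a structured output, so runs can be compared."""
    blob = json.dumps(output, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def reproducible(run_a, run_b):
    """Minimal reproducibility gate: identical inputs must yield
    byte-identical outputs before the result is allowed to drive action."""
    return fingerprint(run_a) == fingerprint(run_b)

lineage_run_1 = {"job": "build_marts", "upstream": ["load_customers", "load_orders"]}
lineage_run_2 = {"job": "build_marts", "upstream": ["load_customers"]}  # silently shorter

print(reproducible(lineage_run_1, lineage_run_1))  # True
print(reproducible(lineage_run_1, lineage_run_2))  # False: drift between runs is detectable
```

A deterministic pipeline passes this gate trivially. A probabilistic one fails it whenever its output drifts, which is exactly the signal a regulator would want surfaced.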

Probabilistic systems do not fail loudly when they miss something.


They fail silently.

 

A silent omission in lineage, impact analysis, or traceability is worse than an explicit gap — because it invites action under false confidence. That is how governance narratives collapse under scrutiny.

No amount of prompting or reinforcement learning changes this, because the limitation is architectural, not behavioural.

 

  4. AI Built on a False Foundation

Perhaps the most important consequence is the one least discussed.

AI systems can only reason safely if there is a stable, deterministic definition of reality underneath them.

When that foundation does not exist:

  • the AI has nothing solid to reason over,
  • “intelligence” becomes narrative,
  • and decisions drift as outputs subtly change between runs.

Building AI on top of probabilistic truth is not innovation — it is compounding uncertainty.

 

This series argues for a simple but unfashionable idea:

 

Before intelligence, there must be determinism.

 

Not instead of AI — but before it.

 

What This Series Is — and Is Not

 

This is not an anti‑LLM series.
It is not a critique of model capability, fluency, or usefulness.

LLMs are exceptionally good at:

  • explanation,
  • summarisation,
  • navigation of known structure,
  • assisting human reasoning.

What they must not be asked to do is define truth for systems that require authority.

This series is about placing AI correctly, not rejecting it.

 

How to Read What Follows

 

The posts that follow move deliberately:

  • From intuitive failure modes,
  • to industry mislabelling of “agentic” systems,
  • to the role of determinism as infrastructure,
  • and finally to concrete, reproducible technical evidence.

Each piece builds on the last. None rely on hype, fear, or speculation.

 

If you disagree, that’s the point — but disagree on architecture, authority, and system design, not optimism.

 

The Question This Series Leaves You With

 

If a system is allowed to act — and its understanding cannot be proven, replayed, or audited — who is really in control?

 

This series exists to force that question into the open.

 

Next: Why probabilistic language models are being mistaken for agents — and why systems expose the flaw - Link

 

The Full Series

 

  1. Determinism, Probability, and the Cost of Getting This Wrong - Link
  2. Why probabilistic language models are being mistaken for agents — and why systems expose the flaw - Link
  3. Stop Calling It Agentic: You’ve Just Automated an LLM - Link
  4. The Myth of Agentic Code Understanding – A Technical Explanation - Link
  5. The Minimum Deterministic Substrate: What Must Be True Before AI Is Allowed to Act - Link
  6. Determinism Is the Forgotten Path to Success: Why the hard path is often the only one that actually scales - Link
  7. The Broken Escalator, Deterministic Lineage, and the Problem of Grounded Truth in AI - Link
  8. When Probabilistic Systems (LLMs) Pretend to Be Deterministic: A Lineage Case Study - Link