Beyond Dead Reckoning
On AI, organisations, and the problem of navigating without a map
I ran across an old colleague from my Thales days recently, someone I hadn’t seen in person for roughly sixteen years.
We were chatting about an AI workshop I was going to lead. I was trying to explain what had been occupying my mind about AI and organisations. He paused for a moment before addressing the issue directly.
“It’s just like something you’re familiar with from the old days,” he said. “The Kalman filter.”
He was right. An idea from navigation, applied to AI, sent me down another rabbit hole. This piece is the result of that conversation.
For some time now, I have been exploring the question of what comes next in artificial intelligence, the horizon beyond the current transformer-heavy paradigm.
This journey is anchored in my work on HScan11, a futures-scanning project co-authored with David Sloly. Through HScan11, we experiment with scanning for early signs of new types of AI, including biological methods, biological computing, and the potential interactions between humans and non-humans.
The project quickly raised a question about how any intelligent system, artificial or organisational, maintains its grip on a changing world.
The current generation of AI systems reveals a curious paradox. Today’s models are genuinely remarkable. They reason across domains, write and debug complex code, synthesise research, and perform on standardised tests at levels that have repeatedly surprised even their creators. The progress over the past five years has been extraordinary by any measure.
And yet something structural remains missing.
Whatever internal representations these systems develop, they lack a robust mechanism for grounding their outputs in a model of how the world actually works. They can give fluent, plausible, and often correct answers, but they can't reliably tell the difference between what they know, what they're inferring, and what they're making up. Capability and groundedness are not the same thing.
There is a further complication. When humans interact with these systems, something intriguing happens: the user often supplies the missing grounding, the model of how the world actually works, bringing context, intention, causal understanding, and the ability to catch confabulation when it occurs.
Zoom out to the broader composite system that includes human and AI interaction within its boundary, and the world model is no longer entirely absent. It is distributed. The human carries what the model lacks.
This reframes how we can understand current AI systems. They are not autonomous intelligences with a missing component. They are one part of a sociotechnical system in which the human is not incidental but load-bearing.
The orientation, the grounding, and the error correction are not features of the model alone; the human in the loop provides much of them. Remove that human, and the system loses something it was never designed to provide for itself: the ability to adapt to messy, real-world situations and make nuanced decisions based on context. That nuance is often missing from the narrative around AI tools.
It is here that the work of Yann LeCun becomes relevant. LeCun, the former Chief AI Scientist at Meta and one of the architects of modern deep learning, has argued that this structural dependency on human grounding is precisely what limits current AI systems and why a different architecture is needed.
When we consider the model in isolation, we can clearly see the deficit he identifies. In systems where a skilled human is actively involved, checking results, spotting mistakes, and providing expert knowledge, the combined system is stronger than the model by itself.
But that qualification carries hidden weight. Humans verify selectively and inconsistently. Outside their domain of expertise, under time pressure, or when outputs are fluent and confident, people tend to trust rather than check.
The very plausibility that makes these systems useful is also what makes confabulation challenging to detect. A capable human in the loop is a genuine safeguard. A passive or overloaded one may be closer to a rubber stamp.
The distributed world model, in other words, is only as good as the human’s willingness and ability to exercise it. Which is precisely why the architecture question becomes urgent as systems move toward greater autonomy and why “a human in the loop” cannot be treated as a permanent solution to a structural problem.
LeCun’s proposed World Model architecture is his answer to this challenge: a radical pivot away from the Large Language Models (LLMs) that dominate today's discourse towards a modular cognitive framework designed for autonomous intelligence.
To me, the architecture looks less like a chatbot and more like the navigation and control systems I studied as an engineer, echoing ideas found in the OODA loop, active inference, and classical Kalman filtering.
The next generation of AI may be built around better navigation rather than better language.
The Navigator’s Dilemma: The Ghost of Dead Reckoning
In classical navigation there is a concept called dead reckoning.
You estimate your current position by taking your last known point and projecting forward based on speed and direction.
It works until it doesn’t.
The fatal flaw is compounding error.
A tiny half-degree error in your compass never stays a half-degree error. It becomes the baseline for every subsequent calculation. Over long distances the drift compounds, and a small misalignment at the start eventually becomes miles of separation from reality.
Experienced navigators never rely on dead reckoning alone. They constantly re-fix their position using external references: the stars, GPS, landmarks, and radar.
Without those corrections, drift becomes inevitable.
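To make the compounding concrete, here is a minimal sketch in Python. The numbers are invented for illustration and the geometry is simplified, but the shape of the problem is the point: a constant half-degree heading error slowly pushes you off course, while a periodic external fix keeps the error bounded.

```python
import math

SPEED_KMH = 20.0          # assumed speed, purely illustrative
HEADING_ERROR_DEG = 0.5   # small, constant compass misalignment
HOURS = 48

def cross_track_error(hours, refix_every=None):
    """Accumulated off-course distance (km) from a constant heading error.
    If refix_every is set, an external position fix (stars, GPS, landmark)
    wipes the accumulated error every N hours."""
    error_km = 0.0
    since_fix = 0
    for _ in range(hours):
        since_fix += 1
        if refix_every and since_fix >= refix_every:
            error_km = 0.0      # re-fix: the drift is corrected, not carried forward
            since_fix = 0
        # each hour the misalignment pushes us a little further sideways
        error_km += SPEED_KMH * math.sin(math.radians(HEADING_ERROR_DEG))
    return error_km

print(f"Dead reckoning only : {cross_track_error(HOURS):.1f} km off course")
print(f"Re-fixing every 6 h : {cross_track_error(HOURS, refix_every=6):.1f} km off course")
```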
LeCun’s critique targets the base autoregressive architecture specifically, not the various scaffolds and wrappers that can be bolted on top.
At its core, the autoregressive objective is simple: predict the next token given what came before. There is no native mechanism for causal reasoning, no built-in capacity to simulate consequences before committing to output, and no architectural imperative to check whether the trajectory makes sense.
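In schematic form, the loop looks something like this. It is a deliberately simplified Python sketch, and `predict_next_token` is a hypothetical stand-in for the whole trained network:

```python
def generate(prompt_tokens, model, max_new_tokens=200):
    """Schematic autoregressive decoding: each step conditions only on the
    tokens generated so far. Nothing in the loop consults the outside world."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model.predict_next_token(tokens)  # hypothetical interface
        tokens.append(next_token)   # the model's own guess becomes the new context
        # there is no step that re-observes reality, simulates consequences,
        # or checks whether the trajectory still makes sense
    return tokens
```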
Tool use, search, chain-of-thought prompting, and external memory can partially compensate for these limitations. But LeCun’s argument is that these are prosthetics, not solutions. They address symptoms without touching the underlying architecture. A system that requires external scaffolding to plan is not, in any deep sense, a planning system.
But there is a further implication that the navigation analogy makes vivid.
If each step carries even a small probability of error, the likelihood of staying perfectly aligned decays rapidly as the sequence grows. Small deviations build up in lengthy responses until the model is reasoning within a partially invented context. The system has no reliable way to pause, re-observe the world, and reset its position. It is navigating entirely from its own internal trajectory.
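A rough back-of-the-envelope calculation shows how quickly this bites. If each generated step independently carries even a small chance of going wrong, the probability of a fully clean trajectory shrinks geometrically. The independence assumption is a simplification, but the shape of the decay is what matters:

```python
p_error_per_step = 0.01   # assumed 1% chance that any single step goes wrong

for length in (50, 200, 1000):
    p_clean = (1 - p_error_per_step) ** length
    print(f"{length:>5} steps: {p_clean:.1%} chance the whole trajectory stays on track")

# roughly 61%, 13%, and well under 1% respectively
```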
This phenomenon is one way hallucination occurs, not through deception, but through drift. The model isn’t lying. It has simply lost its position.
It is worth being precise here. Hallucination can happen for several reasons: gaps in the training data, training objectives that reward sounding correct over being accurate, ambiguous prompts, and the lack of any mechanism for retrieval or verification.
Compounding drift is one failure mode among several. But it is the failure mode that the dead reckoning analogy makes structurally visible, and it is the one that LeCun’s architecture is specifically designed to address.
LeCun’s objection is structural, not incidental. The problem is not that these models make mistakes. It is that the base autoregressive architecture has no native mechanism to detect when it has left the rails.
When Organisations Navigate the Same Way
The deeper story is about more than artificial intelligence.
It is also about how organisations navigate the future.
Many strategic planning systems operate in a way that looks remarkably similar to an autoregressive model. Last quarter’s results become the starting context for the next forecast. Next year’s targets are extrapolated from last year’s performance. Budgets are allocated according to historical patterns of return.
Each decision becomes the baseline for the next. Again, we might see scaffolds and wrappers in some contexts, but they do not address the underlying structural weakness.
This pattern has appeared consistently throughout my own work across organisations in both the private and public sectors.
Planning assumptions built on past performance and linear extrapolation are not simply a failure of imagination. They are often the rational response to the demands of process and governance, as stakeholders frequently prioritise measurable outcomes and compliance over innovative approaches.
Extrapolation is legible. It is auditable. As I discussed in the Certainty Tax, it produces the kind of certainty that approval processes reward.
Over time, the planning cycle quietly becomes theatre. The outputs are designed to satisfy governance rather than genuinely interrogate the environment, and the organisation’s understanding of the challenges and opportunities it faces grows correspondingly shallow.
Organisations go through the motions of strategic inquiry, while the real work of questioning foundational assumptions, scanning for weak signals, and modelling genuinely different futures is crowded out by the demands of producing a credible-looking plan.
The assumptions don’t just persist. They get institutionalised.
But organisations rarely extrapolate numbers alone. They also carry forward the beliefs and assumptions that once made those numbers possible. Every period of success leaves behind a set of interpretations about how the world works: what customers value, which technologies matter, where risk lies, how competitors behave, and what constitutes a “credible” strategic move.
Over time these interpretations become deeply embedded in organisational thinking and belief systems.
Success, in this sense, becomes a powerful form of cognitive entrainment. The organisation becomes conditioned not only to repeat past behaviours but also to interpret new signals through the same mental model that once produced success.
This is why strategic drift is often so difficult to detect internally. The organisation is not simply projecting past performance forward. It projects the worldview that produced that historical performance.
This is not a new observation. The cybernetician Stafford Beer recognised it in the 1970s with his Viable System Model. Beer’s central insight was that viable organisations must hold a balance between two functions: System 3, which optimises current operations, and System 4, which scans the outside world, anticipates future possibilities, and helps the organisation adapt.
When the balance between System 3 and System 4 breaks down, the organisation loses its capacity to update its internal model of the world. It becomes, in Beer’s terms, no longer viable because its model of the environment has drifted from the environment itself.
Beer called this a failure of adaptation. We might now call it organisational dead reckoning.
For long periods of the twentieth century, this approach worked reasonably well. Markets evolved slowly. Technological change followed relatively stable trajectories. Economic cycles were measurable and manageable. Under those conditions, linear extrapolation was often sufficient.
But we are no longer operating in that kind of environment.
Across multiple domains, including AI capability, biotechnology, energy systems, geopolitical alignment, climate dynamics, and war, change is increasingly non-linear. Compute scales exponentially. Network effects accelerate adoption curves. Feedback loops amplify small innovations into systemic shifts.
When organisations continue to plan using linear projection inside environments undergoing exponential transformation, the same failure mode takes hold.
Error begins to compound.
With each cycle of planning based on outdated assumptions, the internal model drifts further from the world it is supposed to describe. Eventually the organisation finds itself making confident decisions inside a context that no longer exists.
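A toy illustration of the gap, using made-up numbers: a plan that keeps adding last year’s typical increment while the underlying driver, whether compute, adoption, or something else, starts doubling.

```python
# Hypothetical figures, purely for illustration
plan = 100.0      # this year's baseline in the strategic plan
reality = 100.0   # the environment the plan is meant to describe

for year in range(1, 6):
    plan += 10.0     # linear habit: extrapolate last year's typical increment
    reality *= 2.0   # non-linear environment: the underlying driver doubles
    print(f"Year {year}: plan {plan:5.0f}, environment {reality:6.0f}, "
          f"off by {reality / plan:4.1f}x")

# by year five the plan describes a world roughly twenty times smaller than the real one
```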
This is the strategic equivalent of hallucination.
Such organisations are navigating complex, accelerating systems using planning methods designed for linear worlds.
In effect, they are steering the future the same way autoregressive models generate language: by extending the past token by token.
The Architecture of Autonomy: LeCun’s Six Modules
Tool use, retrieval, and agentic scaffolding can extend what autoregressive systems are capable of. But LeCun’s architecture asks a prior question: whether the foundation itself is right. His answer is a modular system built around a World Model capable of simulating potential futures before taking action.
LeCun proposes a modular architecture of six interacting systems, but three are philosophically central to the argument here.
The World Model is the central innovation: a predictive simulator that can imagine how the environment will change before any action is taken. Where an autoregressive model continues the past, the World Model rehearses possible futures. It is the mechanism that replaces dead reckoning with active position-fixing.
The Cost Module defines what the system is trying to avoid or achieve, the energy landscape that shapes behaviour. And the Configurator acts as the executive controller, adjusting objectives and priorities over time as circumstances change.
Supporting these are a Perception module (reading the current state of the world), Short-Term Memory (storing recent context), and an Actor (proposing and executing actions). Taken together, these components form a closed-loop intelligence system that continuously observes, simulates, evaluates, and acts, rather than simply continuing.
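A very rough structural sketch of how those modules might interact in a single decision cycle. This is not LeCun’s code; the module interfaces and names below are invented for illustration:

```python
class WorldModelAgent:
    """Illustrative sketch of a LeCun-style modular agent. The interfaces
    are assumptions made for this example, not the actual architecture."""

    def __init__(self, perception, memory, world_model, cost, actor, configurator):
        self.perception = perception      # reads the current state of the world
        self.memory = memory              # stores recent context
        self.world_model = world_model    # predicts how a state evolves under an action
        self.cost = cost                  # scores imagined outcomes against objectives
        self.actor = actor                # proposes candidate actions
        self.configurator = configurator  # adjusts objectives as circumstances change

    def decide(self, observation):
        state = self.perception.encode(observation)
        self.memory.update(state)
        objective = self.configurator.current_objective(self.memory)

        # rehearse possible futures before committing to any action
        best_action, best_cost = None, float("inf")
        for action in self.actor.propose(state):
            imagined_future = self.world_model.rollout(state, action)
            c = self.cost.evaluate(imagined_future, objective)
            if c < best_cost:
                best_action, best_cost = action, c
        return best_action
```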
In engineering terms, it resembles a high-dimensional control system, closer to the navigation algorithms used in aerospace than the text generators that dominate today’s AI discourse.
One such algorithm is the stated inspiration for this piece, the Kalman filter, which was developed in the 1960s for aerospace guidance and is still at work today in GPS receivers, missile guidance, and aircraft autopilots.
It operates by constantly comparing two things: what the system expects to be happening based on its internal model and what sensors report is actually happening. The difference between prediction and observation drives a constant correction. The system is continuously corrected toward reality because it never stops checking.
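For the technically curious, here is a minimal one-dimensional Kalman filter in Python. It is heavily simplified, and the noise values are arbitrary, but it shows the predict-then-correct rhythm described above:

```python
def kalman_step(x_est, p_est, measurement, velocity=1.0, q=0.01, r=0.5):
    """One cycle of a 1-D Kalman filter tracking a position.
    x_est, p_est : previous estimate and its uncertainty
    measurement  : what the sensor actually reports
    q, r         : assumed process and measurement noise variances"""
    # Predict: the dead reckoning step, projecting the estimate forward
    x_pred = x_est + velocity
    p_pred = p_est + q                 # uncertainty grows while we only predict

    # Update: compare prediction with observation and correct toward reality
    gain = p_pred / (p_pred + r)       # how much to trust the sensor this cycle
    x_new = x_pred + gain * (measurement - x_pred)
    p_new = (1 - gain) * p_pred        # uncertainty shrinks after the fix
    return x_new, p_new

# Example: the true position advances by 1.0 each step; noisy readings keep correcting us
x, p = 0.0, 1.0
for z in (1.2, 1.9, 3.1, 4.0, 5.2):
    x, p = kalman_step(x, p, z)
    print(f"estimate {x:.2f}, uncertainty {p:.3f}")
```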
That is the mode of operation at the heart of LeCun’s World Model architecture, and one I believe points to the importance of foresight in organisations.
JEPA: Why Meaning Beats Pixels
The training framework underlying this architecture is called Joint-Embedding Predictive Architecture (JEPA), and it is part of how the system avoids compounding error in the first place.
Most generative models attempt to reconstruct the world in exhaustive detail, predicting every pixel in an image or every token in a sentence (a metaphorical parallel with metrics and measurement?). LeCun argues this approach is deeply inefficient. The world contains enormous amounts of high-entropy noise, details that are difficult to predict and largely irrelevant to intelligence.
JEPA takes a different approach. Instead of trying to predict the raw data, it predicts hidden representations, which are abstract summaries that reflect the meaning of the environment instead of just how it looks.
Think of watching a dog run. A generative model attempts to predict the exact position of every strand of fur. A JEPA model predicts the meaningful invariant: the dog is moving forward.
By focusing on deeper patterns rather than surface details, the system learns a far more efficient representation of reality, one that generalises better across situations and supports more reliable prediction.
It doesn’t predict the noise. It anticipates the structure. And that structural understanding is what LeCun argues would make the World Model’s simulations reliable rather than just plausible.
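In code-sketch terms, the training signal looks roughly like this. This is a conceptual toy, not Meta’s I-JEPA or V-JEPA implementation: the encoders and predictor are placeholder linear layers, and the point is simply that the prediction error is measured in representation space rather than pixel space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM_IN, DIM_LATENT = 256, 32   # arbitrary toy dimensions

context_encoder = nn.Linear(DIM_IN, DIM_LATENT)   # encodes the visible part of the scene
target_encoder  = nn.Linear(DIM_IN, DIM_LATENT)   # encodes the part to be predicted
predictor       = nn.Linear(DIM_LATENT, DIM_LATENT)

def jepa_style_loss(context_view, target_view):
    """Predict the embedding of the target view from the context view.
    The loss lives in latent space, so high-entropy surface detail
    (every strand of fur) never has to be reconstructed."""
    s_context = context_encoder(context_view)
    with torch.no_grad():          # the target branch is held fixed, stop-gradient style
        s_target = target_encoder(target_view)
    s_predicted = predictor(s_context)
    return F.mse_loss(s_predicted, s_target)

# Toy usage with random stand-in data
ctx, tgt = torch.randn(8, DIM_IN), torch.randn(8, DIM_IN)
print(jepa_style_loss(ctx, tgt).item())
```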
A Pattern Hiding in Plain Sight
LeCun’s world model is not just an AI design pattern. Versions of this architecture appear across multiple disciplines. John Boyd described it in military strategy through the OODA loop. Scenario planners operationalised something similar through the exploration of multiple plausible futures. Karl Friston found it in the structure of biological cognition through active inference.
Each field discovered the same underlying principle: intelligent systems do not predict the future; they simulate possibilities before acting.
That convergence deserves a fuller exploration, which I will return to in the follow-up.
Explaining the Idea in Two Minutes
When explaining this shift to non-technical audiences, I recommend a simple analogy.
Current AI systems are like brilliant parrots, a metaphor drawn from Bender et al.’s influential 2021 paper “On the Dangers of Stochastic Parrots”.
They know an enormous number of words and patterns. When they say “the glass fell”, it is because those words frequently occur together in the data they were trained on. But they do not actually simulate what happens when a glass falls.
Yann LeCun’s proposal is to move from the Parrot to the Simulator.
Instead of guessing the next word, the system runs a small internal movie.
If the glass falls, does it shatter?
If the car turns, does it collide?
If the robot moves, does it succeed?
The goal is not to memorise the world.
The goal is to simulate it well enough to act intelligently within it.
From Dead Reckoning to Navigation
Our work in HScan11 suggests we are approaching a strategic inflection point in AI development.
Simply scaling autoregressive models will likely continue to produce impressive systems. But scale alone may not deliver the kind of grounded reasoning and autonomy required for real-world agency.
LeCun’s proposal points toward a different path. Predictive world models ground intelligence in the latent structure of the environment, moving us closer to systems capable of navigating reality.
The shift is from predicting words to anticipating states. From continuation to simulation. From dead reckoning to active navigation.
The parrot knows an enormous number of routes. The navigator knows where they are.
That distinction between fluency and orientation turns out to matter far beyond artificial intelligence. The same problem appears in military strategy, neuroscience, and the way organisations plan for futures they cannot fully see. That is where the next piece picks up…



