The Certainty Tax
Legibility is a dividend in stable times but a tax in turbulence.
In The Invisible Blueprint, I used the phenomenon of the ant mill, where ants follow each other in a deadly, unending circle, to describe a pattern in organisations. This post explores how governance is often where organisations unwittingly build their own ant trails, optimising for internal signals that become easy to mistake for reality.
Governance as an Operating System
One way to see this issue is to ask a blunt question: what, exactly, can the organisation perceive?
The blueprint is the set of things an organisation can reliably notice, classify, and reward. If “real work” is what survives funding rounds, stage gates, and reporting cycles, then the blueprint is less belief than infrastructure: decision rules, metrics, paperwork, and incentives. Governance doesn’t float above this.
Here, I am using the term ‘governance’ to refer to the dominant model of governance I have experienced across many corporates (with my visiting ethnographer hat on), characterised by stage gates, financial controls, audit requirements, and performance management systems. In that sense, governance is the mechanism that gives an organisation its sensory organs.
That mechanism gets described as “oversight”: committees, policies, approval forums, budget calendars. In practice it behaves more like an operating system. It defines what counts as credible, fundable, defensible, and therefore what can exist as an initiative at all.
It shapes the preconditions of a proposal. Which ideas can be stated without costing someone status? Which kinds of uncertainty are acceptable to admit? Which futures can be spoken in the organisation’s official dialect and be taken seriously?
There are domains where legibility and standardisation are critical, where work is repeatable, uncertainty is low, and optimisation improves results. Business cases, roadmaps, and risk registers enable comparison, coordination, and accountability. The issue emerges when these same formats serve as the universal entry fee for seriousness, irrespective of the kind of work being proposed.
This pattern isn’t confined to corporate settings. Local and national governments face the same issue: spending reviews, Treasury evaluations, and departmental silos all tend to favour work that is easy to measure, runs to a set timeline, and produces clear results, and the need for democratic accountability makes the pattern even harder to see or change.
Underneath the visible procedures sits what dynamical systems theory calls a basin of attraction: a region of stable behaviour that pulls work toward whatever is easiest to approve, defend, and report. Governance creates the basin by defining what counts as evidence, what counts as prudence, and what counts as a deliverable.
The blueprint then stabilises the basin in return; categories start to feel like the natural shape of the world. What began as control becomes perception; perception becomes constraint. The institution ends up wearing its own ant trail, because work reliably finds approval, funding, promotion, status, career survivability, and reputational safety there.
In the context of exploratory work, foresight, sensemaking, and strategic discovery, this systemic bias toward the measurable and defensible becomes what I call the Certainty Tax: the cost an organisation pays to operate inside that basin when the work resists the formats the basin rewards.
The Viability Problem
This distinction matters for viability, not just efficiency. Bill Sharpe’s Three Horizons model describes how organisations sustain themselves across time: Horizon 1 is the current operating model, Horizon 3 is the emerging future, and Horizon 2 is the transitional space where the two are negotiated.
Most governance architectures are Horizon 1 infrastructure. They can fund, evaluate, and legitimise work that operates within the current model. But activities like foresight, sensemaking, and strategic discovery belong to Horizons 2 and 3, and when legitimacy is based on Horizon 1 rules, the organisation may effectively manage the present but struggle to develop what comes next.
If governance provides the organisation with its sensory organs, then the development of those organs becomes crucial. In Stafford Beer’s Viable System Model, viability depends on a dynamic balance between System 3 (control and internal optimisation) and System 4 (external intelligence and adaptation).
Many governance architectures amplify System 3 while structurally under-protecting System 4. They are acutely sensitive to deviations from plan, budget variances, and compliance alerts, but far less attentive to weak external signals, strategic drift, or emerging challenges.
The Certainty Tax is the cost incurred when System 4 activities must be translated into System 3 language to be considered valid, or when governance structures over-weight System 3 thinking, especially in unstable contexts.
The Selection Effect
Learning, foresight, and sensemaking under uncertainty don’t arrive as neat deliverables. They arrive as reframed questions, eliminated hypotheses, and weak signals that become strong only in hindsight. Yet many organisations set the same entry fee every time: business case, roadmap, risk register, KPI set, forecastable milestones.
If your work fits the template, it can be read by those who hold budgets and vetoes, and it satisfies an institutional need for certainty, even if that certainty is mostly performative. If the problem isn’t yet framed, if the route depends on what you learn, or if the value lies in engaging uncertainty rather than executing a known plan, the proposal fails before it can even be judged. It is seen not as a failed project but as something that never met the entry criteria for one.
The Epistemic Filter
Governance quietly installs a theory of knowledge: what counts as evidence, what reads as competence, what looks like prudence, and what triggers suspicion. The theory is nowhere formally documented, but it is easy to test: propose work without a risk register and watch what happens.
The response is not philosophical; it is administrative: decision rights, funding thresholds, reporting requirements, and audit trails. These mechanisms do more than constrain action. They define organisational perception. People learn which uncertainties are acceptable to name, reduce risks to checkboxes, and dress unavoidable bets up as plans.
The result is that governance preferentially selects work with clean edges, stable requirements, and short feedback loops; work that offers the appearance of managed risk.
Meanwhile, the practices that can best reduce uncertainty (iteration, learning, reframing, hypothesis tests, and reading weak signals) often appear, in governance terms, vague, hard to audit, or “not a real deliverable”. At best, they’re treated as overhead; at worst, they’re read as indiscipline.
And once that interpretation takes hold, the attractor strengthens: legible artefacts get rewarded, reward signals train behaviour, and behaviour produces more legibility. The basin deepens through use.
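The reinforcing loop above can be made concrete with a toy simulation (a sketch only; the population size, funded fraction, and imitation rate are invented parameters, not measurements of any real organisation). Each “proposal” carries a legibility score; governance funds only the most legible, and everyone then drifts toward what got funded:

```python
import random

random.seed(42)  # reproducible toy run

def simulate(rounds=30, pop=200, funded_share=0.2, imitation=0.5):
    """Toy reinforcing loop: governance funds the most legible
    proposals, and proposers then imitate the funded standard."""
    # Each proposal is a 'legibility' score in [0, 1]; start diverse.
    proposals = [random.random() for _ in range(pop)]
    mean_history = []
    for _ in range(rounds):
        # Selection: only the most legible fraction clears governance.
        cutoff = sorted(proposals, reverse=True)[int(pop * funded_share) - 1]
        funded = [p for p in proposals if p >= cutoff]
        funded_mean = sum(funded) / len(funded)
        # Reward trains behaviour: all proposals drift toward
        # what was funded last round.
        proposals = [p + imitation * (funded_mean - p) for p in proposals]
        mean_history.append(sum(proposals) / pop)
    return mean_history

trajectory = simulate()
print(f"mean legibility after round 1: {trajectory[0]:.2f}, "
      f"after round 30: {trajectory[-1]:.2f}")
```

Run it and mean legibility climbs round over round while the diversity of proposals collapses toward the funded standard: no individual acts in bad faith, yet selection plus imitation is enough to deepen the basin.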
It helps to treat governance the way we treat measurement in science: as a filter that shapes what becomes observable. Leave the filter unchanged, and something subtler happens: the institution reshapes itself around the filter.
What gets measured becomes what gets valued. What gets valued becomes what gets done. Promotion tracks start to favour legibility. Intelligence becomes whatever fits the reporting format. Caution becomes whatever produces audit-friendly artefacts. Ambition becomes whatever can be named without sounding like speculation.
In practice, maintaining legitimacy requires a small deception: one must pretend the future can be scheduled in order to secure the resources needed to explore it.
The Deepening Basin
Locally, inside the basin, things can look efficient: variance drops, coordination improves, and accountability is easy to narrate. Globally, the organisation can become strategically fragile, optimising for coherence while losing the capacity for discovery.
AI changes the texture of this problem. It dramatically lowers the cost of producing certainty-shaped artefacts: plans, forecasts, roadmaps, and coherent narratives. Used well, it can genuinely enhance delivery. Used uncritically, it makes the attractor easier to inhabit.
When producing a plausible plan becomes cheap, the organisation can generate legibility at scale and mistake coherence for comprehension. This is a thread I intend to return to soon.
Fitting Governance to the Work
None of this discussion argues for scrapping governance. It argues for fitting governance to the kind of work an organisation needs. In public institutions the stakes are distinct: citizens should be able to see how money is spent. The tension isn’t between transparency and opacity; it’s between forms of rigour.
In corporate settings the logic appears different but often resolves in similar ways: quarterly earnings pressure, investor expectations, and internal performance targets create their own demand for legibility. Here, too, the issue is not whether discipline is necessary, but which form of discipline dominates: short-cycle financial predictability or longer-cycle strategic learning.
When “rigour” becomes synonymous with “predictability”, the same conflation takes hold. If strategy requires exploration, the entry fee for seriousness cannot be “a tidy plan”. It has to include rigour that survives uncertainty: explicit hypotheses, learning milestones, stage-appropriate metrics, guardrails, time-bound experimentation and exploratory cycles, and funding models that treat early work as buying options rather than buying certainty.
In other words, don’t try to “escape governance”.
Design governance architectures that generate distinct basins of attraction, some geared towards execution and assurance, others for discovery and learning. (I use the term ‘design’ in its most iterative sense, not to suggest that we can predefine and engineer the answer.)
Otherwise, the selection effect continues: legibility consistently outperforms exploration, and the organisation labels it as prudence until it manifests as fragility.
Further Implications
This lens also touches dynamics I’m not unpacking yet: intrapreneurship, power, structural coupling, and organisational orientation. It connects back to the Futures Triangle and my earlier Lagrangian framing of strategic dynamics.
The relevant point here is simpler: the weight of the past persists in governance, including Taylorist assumptions about control, specification, and decomposability, long after the factory floor disappeared.
It also shapes how failure gets narrated. In exploration, “failure” is often just learning. In governance, it’s treated as non-compliance with a plan. This mismatch is significant because it influences the types of projects people choose not to propose, which ultimately represents the most damaging aspect of the Certainty Tax.
Intellectual Lineage
The suspicion that institutions mistake what they can measure for what matters is not new. Political theorists, such as James C. Scott, have shown how states simplify reality to manage it.
Management thinkers like W. Edwards Deming have warned that metrics distort behaviour.
Cyberneticians like Stafford Beer have reminded us that sensing the future is a different function from optimising the present.
Complexity theorists like Dave Snowden have argued that tools designed for order can become destructive when applied to emergence.
The Certainty Tax is my way of making that tradition tangible in the everyday machinery of governance. It is not that measurement is bad or that structure is wrong. It is that the demand for certainty quietly reshapes perception itself. Over time, organisations don’t lose control.
They lose sight.



