“The incalculable element in the future exercises the widest influence and is the most treacherous.”
- Thucydides (as cited by Zachery Tyson Brown)
Thucydides reminds us that tomorrow is always partly unknowable; nowhere is that tension clearer than in today’s race to deploy generative AI.
Generative AI has altered the knowledge landscape, offering speed, breadth, and an almost uncanny fluency. It sorts through the past with remarkable facility, providing us with summaries, syntheses, and simulations on a scale unthinkable just ten years ago.
But as Zachery Tyson Brown points out in his excellent article, “The Incalculable Element: The Promise and Peril of Artificial Intelligence”, there are significant boundaries to AI's knowledge and more serious dangers if we take its usefulness for granted.
So we arrive at a deeper, more human question: what counts as knowledge?
In this eXplorulation, I expand upon this theme and explore questions I have been posing for a while: What is the unique value of human and collective intelligence? In what situations does human intelligence remain distinct from, and superior to, AI? And why, in a world of powerful machines, might our most essential resources be intuition and imagination rather than calculation?
There is a difference between the wild, incalculable, relational intelligence that is stubbornly and uniquely human and the limited, statistical knowing of machines.
The Various Ways of Knowing
Knowledge is not one thing. It comes in various forms, each of which is intertwined with the others:
Explicit and Propositional: The facts, logic, and codified data that form AI's native domain.
Tacit and Embodied: The abilities, intuitions and skills we carry but often struggle to express or articulate in words.
Cultural and Emotional: The capacity to interpret a story, grasp subtleties, or sense mood, all influenced by shared meaning and context.
Collective and Intergenerational: Knowledge that develops through discussion and lived experience over time and across communities.
Indigenous and Relational: A living web inseparable from land, language, and ethical and moral relationships.
Seen through this lens, it becomes clear that generative AI occupies only a small slice of that spectrum.
Generative AI excels in the explicit knowledge domain. It is a master of the ordinary and the codifiable. But as we move deeper into the tacit, the contextual, and the communal, the machine's competence starts to falter. It cannot live inside a story or sense the echo of a tradition. It cannot live through what it claims to understand, nor can it truly know it.
And nowhere is that limitation more glaring than when we look ahead, into futures that data alone cannot reveal.
Forecasting and Imagination
AI makes predictions by modelling what is likely based on historical trends. However, the radical, open, and unfinished future cannot be inferred from data alone.
A fundamental difference here is that imagination conjures up what is not yet there, whereas prediction extrapolates from what is.
AI simulates probabilities. People imagine. We keep leaping, conjecturing, and asking "What if?" even after the data runs out. Imagination is not a luxury but a strategic and cognitive necessity for surviving in a changing world.
Hannah Arendt noted that the ability to start over, to break the cycle of cause and effect, and to accomplish what hasn't been done yet is a prerequisite for taking action. That ability is still very much human.
The Cost of Calculation: AI and Organisational Blind Spots
This imaginative deficit has organisational consequences: what I call the cost of calculation.
In many organisations, the future is increasingly framed through a narrow lens dominated by the promise of optimisation, measurement, and certainty. This pollutes our thinking, creating dominant regimes of anticipation. AI is imagined less as a creative partner and more as a force for productivity, efficiency, and cost reduction.
Narratives of AI-driven transformation tend to prioritise:
Increasing output while lowering headcount
Automating decisions once made through experience and moral reasoning
Accelerating workflows in service of the bottom line
Framing knowledge workers not as meaning-makers but as inefficiencies to be streamlined
These stories align neatly with the commercial ambitions of the major AI frontier firms and the logic of capital that governs most strategic planning. In this worldview, human capability is not something to be nourished or deepened; it is a cost to be managed, re-skilled, or displaced.
Taken together, these tactics create an impressively lean enterprise, on paper. But in privileging only what is measurable, we risk losing the immeasurable:
The subtle knowledge of how to mediate trust within a team
The cultural nuance required to sense an emerging shift before it becomes a trend
The moral discernment to say, “Just because we can doesn’t mean we should.”
We mistake optimisation for intelligence, efficiency for wisdom, and calculation for care. In doing so, we create futures that undervalue the qualities that make us most adaptive and resilient in times of change.
In an age of intelligent machines, the true competitive advantage may not be how well we deploy AI but how well we remember what it cannot replace.
This brings us to one of the most challenging aspects of futures work: detecting those early, ambiguous signals of change that defy easy quantification.
The Incalculable and Weak Signals
In strategic foresight, weak signals are the first signs of change: subtle, ambiguous, and often found at the margins. These are the anomalies that hint at what is coming, the tremors before the earthquake.
Finding a weak signal takes more than data analysis. It comes down to recognising that something is off, that something new might be emerging. That recognition requires imagination, narrative sensitivity, and intuition. Currently, AI cannot accomplish this, not for lack of knowledge but for lack of orientation in the world. It does not inhabit a context. It cannot sense what truly matters. It does not know.
Weak signals exist in the realm of unarticulated, instinctive awareness, the sphere of the intuitive and immeasurable.
The Human Domain: Meaning, Relationship, and Judgement
That circles us back to Zachery Tyson Brown's warning against "falling asleep at the wheel": entrusting our judgement to algorithms trained on patterns from the past.
The deeper threat, however, is cultural: losing the ability to think, imagine, and relate in the most critical ways.
There is more to human intelligence than just reason. It’s:
Contextual: We contribute discernment, the capacity to read a room, recognise irony, and weigh conflicting values.
Embodied: Our knowledge is based on experience rather than just calculations; we feel and live it.
Imaginative: We dream, conjecture, and speculate; we dare to ask "what if?"
Relational: Through memory, rituals, cooperation, and storytelling, we co-create knowledge in our communities.
Adaptive: We develop by creating new things and refining what has already been done.
We are creators of meaning rather than rational keepers of information.
These facets of intelligence hint at a broader ecology where human and machine each find their rightful niche.
In Search of a Living Ecology of Knowledge
What does this mean for us in the age of the generative machine?
AI as Utility: Let's lean on GenAI's strengths, which include combinatorial exploration, information retrieval, and the compression of complexity. As a prosthesis for explicit knowledge, it is remarkably effective.
Horizon of Human Intelligence: Our duty and gift still extend to the more expansive, messy, relational forms of knowing. Here, we face the unknown, not with maths, but with curiosity.
Hybrid Futures: The most exciting future is one in which artificial intelligence humbles us by exposing the limitations of codified knowledge and the need to develop our underutilised abilities.
Even if frontier models someday aspire to those human domains, one can’t help wondering: what would they have to become?
If AI is ever to match human intelligence, it will need to be capable of more than processing language and identifying patterns. It will need to struggle with meaning. It would have to engage, feel, suffer, adapt, and perhaps imagine. The question remains:
Do we want a machine that can dream? Or would we like to learn how to manage our dreams better?
Human Intelligence's Enduring Gift
What is the worth of both individual and group intelligence, then?
It is this: Human and collective intelligence are living, evolving processes of meaning, judgement, and creativity. They are dynamic, adaptive, and relational, constantly transforming ambiguity into insight and uncertainty into possibility. Rather than being static repositories, they are ever-changing networks of experience, interpretation, and imagination.
We contribute context and discernment, the capacity to perceive the unsaid and balance conflicting values.
We represent experiential and tacit wisdom, which is knowledge developed via shared practice and lived experience rather than that expressed in words.
We create the new through creativity and abductive leaps, with the courage to ask "what if?" even when the data runs out.
We produce and maintain meaning through the intergenerational weaving of story, place, and purpose, cultural narrative, and collective memory.
Rather than merely optimising for the past, we adapt and evolve by recognising and embracing ambiguity, and by co-creating the future together in our communities.
The value of human and collective intelligence lies in how we know together. We can understand, care for, imagine, and act in ways that surpass the sum of our individual contributions. In a world of powerful machines, such intelligence is our enduring gift and our most significant responsibility.
We will not outcompute the machine. But we can out-relate it. Out-imagine it. Out-create it.
By doing this, we preserve the incalculable factor that determines our future and serves as a reminder of who we are.
The journey continues…