The first time I stumbled upon the concept of abductive reasoning was when I cracked open the pages of "Exposing the Magic of Design" by Jon Kolko, a book recommended to me by my long-time boss and mentor, Didier Boulet. As I delved into its depths, I found that this concept articulated a hunch I had about design, and latterly opened up a new world of exploration into the domain of futures thinking.
The hunch was important. In my early days in the engineering and technical research world, I was involved in the development of C4I (Command, Control, Communication, Computers & Intelligence) systems. I felt constrained by an overly rational, linear, reductionist, and somewhat process-driven engineering approach, and I sensed that many of the challenges we faced were genuinely complex, in the truest sense of the word. This led me to start exploring other avenues.
Through a rather convoluted journey, I discovered many adjacencies, including the US DoD Command & Control Research Programme (CCRP) and their focus on applied complexity science, the Santa Fe Institute and their focus on complex phenomena and cross-disciplinary collaboration, and the work of Dave Snowden and Cynefin, which I have followed avidly ever since.
The concept of complexity struck a chord with me, not only due to my frustrations with the defence engineering culture, but also because of my experiences spanning over a decade in the military and my time at university. Paul Campbell, a physicist and my academic supervisor, who also played a pivotal role in my career, once said,
"The 20th century was the century of physics, the 21st will be the century of complexity."
Reflecting on my experiences, I realise that they had prepared me to think differently when I entered the world of engineering.
All of this laid the foundation for breaking out of the engineering mould. With strong business sponsorship and a healthy dose of naive optimism, I ventured into the world of innovation culture and intrapreneurialism. This led me to the discovery of design thinking in 2011, and a fascinating journey ever since. A journey that slowly brewed the hunch.
Beyond the ‘Jazz Hands’ of colourful post-it notes, bean bags, and design thinking workshops, I sensed that there was a deeper utility in design. Participatory processes, cross-disciplinary collaboration, creativity, imagination, and diversity were fundamentally important. Kolko’s book provided the language to articulate my intuition. I came to see ‘abductive reasoning’ as a process of making sense of the world through a combination of experience, logic, intuition, and creativity. The ability to synthesise diverse ideas and data points into coherent narratives plays a crucial role in design and strategy, allowing us to envision new possibilities for the future.
I also saw the importance of abductive sensemaking when engaging with complex problems, where a dynamic interplay of the unknown, unknowable, and unimaginable calls for this type of reasoning. This in turn calls for human experience, creative exploration, and intuition. Mariana Zafeirakopoulos does a great job of unpacking this further in her article, "Why complex problems need abductive reasoning."
As I began to draw connections between abductive reasoning and its role in futures thinking, I stumbled upon an article from the University of Birmingham, "Thinking about thinking about futures."
The article describes a workshop that explored how approaches to imagining our futures can be improved. The authors hypothesised that imagination and abductive thinking are crucial to enabling individuals to envision potential futures. While reading this, I couldn't help but think about the potential of artificial intelligence.
With the advent of large language models like GPT-4 and the ongoing pursuit of artificial general intelligence (AGI), I pondered a question: Can these models use abductive reasoning? How do their abilities compare to human insight, and how might this impact futures thinking? I delved into the world of AI, focusing on the difference between AI hallucination and abductive creative leaps and insights.
AI hallucination refers to the phenomenon where an AI system generates outputs that seem plausible but are ultimately false or ungrounded, while abductive creative leaps involve the ability to make insightful connections between seemingly unrelated ideas. The limitations of AI in abductive reasoning can be attributed to several factors.
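Abduction in its classical, Peircean sense is often glossed as "inference to the best explanation". A toy sketch can show that mechanical skeleton; everything in it (the hypotheses, the observations, the plausibility scores) is invented for illustration, and the point is precisely what the sketch leaves out: generating good candidate explanations in the first place, which is the creative part.

```python
# Toy sketch of abduction as "inference to the best explanation".
# Hypotheses, observations, and scores are all illustrative assumptions.

def abduce(observation, hypotheses):
    """Return the label of the hypothesis that best explains the observation.

    Each hypothesis is a (label, explains) pair, where `explains` maps
    observations to a plausibility score in [0, 1].
    """
    return max(hypotheses, key=lambda h: h[1].get(observation, 0.0))[0]

hypotheses = [
    ("it rained overnight", {"wet lawn": 0.9, "wet street": 0.9}),
    ("the sprinkler ran",   {"wet lawn": 0.8, "wet street": 0.1}),
    ("a water main burst",  {"wet lawn": 0.2, "wet street": 0.7}),
]

print(abduce("wet lawn", hypotheses))  # → "it rained overnight"
```

Selecting among pre-scored hypotheses is the easy, computable half of abduction; the creative leap of proposing the hypotheses at all is where human experience and intuition do the work.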
First, AI models like GPT-4 are predominantly based on the application of statistical models for pattern recognition, which makes them excellent at identifying trends and correlations but less adept at generating truly novel ideas. Additionally, AI systems often lack the ability to engage in sensemaking, which is a critical component of abductive reasoning and involves the capacity to discern meaning from complex and ambiguous situations.
Human creativity, especially in the context of futures thinking, is deeply rooted in our capacity for empathy, emotion, and cultural understanding. This human-centric aspect of design and futures thinking is something that AI models, at least for the time being, are unable to fully replicate. This view was further reinforced by this excellent opinion piece in the New York Times, "We Shouldn’t be Scared by ‘Superintelligent A.I.’"
The author suggests that much of the thinking around the potential of AI underestimates the complexity of general, human-level intelligence and that the notion of superintelligence without humanlike limitations may be a myth. That hasn't stopped people trying. I also uncovered this book chapter from 2013, "AI Approaches to Abduction."
Abductive reasoning remains a vital component of design, futures thinking, and strategy, allowing us to synthesise diverse ideas and navigate complexity. While AI models like GPT-4 have made significant strides in recent years, they still have a long way to go before they can truly match the creative and intuitive powers of the human mind.
In these times of AI hysteria, we should not lose sight of the importance of our own unique gifts in shaping the future. While AI models like GPT-4 may continue to advance and augment our understanding of the world, they are not yet capable of replacing the abductive creative leaps and insights that characterise the human experience.