In an era when AI can write code, translate languages, and even compose poetry, one cognitive domain remains exclusively human: abstract thinking. Instead of fearing or mocking the generative errors and "hallucinations" produced by early versions of this technology, we should embrace these tools as extensions of ourselves and, used wisely, as an unprecedented means to amplify human thought.
We're witnessing something fascinating in 2025. OpenAI's latest reasoning models hallucinate up to 79% of the time on some benchmarks, and roughly a third of the time on others. Anthropic's CEO admits, "We do not understand how our own AI creations work." Yet we're simultaneously entering an age where these same systems can process millions of documents, write sophisticated code, and navigate complex linguistic relationships with unprecedented speed.
This paradox reveals a fundamental truth: AI excels at pattern recognition and statistical processing, but it cannot truly think in the abstract sense that defines human cognition.
Abstract thinking—our ability to conceptualize intangible ideas like freedom, justice, humour, or love—emerges from something AI cannot replicate: mammalian consciousness's organic, multisensorial nature. When you contemplate the concept of "wisdom" or envision a hypothetical future scenario, you're drawing from a lifetime of embodied experiences, emotional memories, and sensory inputs that no statistical model can truly simulate.
Consider how abstract thinking manifests in your daily life, in the many scenarios where no AI is capable of “deeply helping you”.
As the source research shows, abstract thinking involves "ideas and principles that are often symbolic or hypothetical"—concepts that require the kind of contextual, experiential understanding that emerges from living in a physical, social, and emotional world.
Agentic AI (see our ECOChat) recreates our judgment, hesitations, and thought processes, but mechanically and only from data. Humans don’t work like that: even when statistics and data tell an ML system that a sentence translates one way, “we know” that in context the right translation is a different one. Speakers of highly contextual languages like Japanese know this very well: it’s not that the translation memory was wrong; the translation was right in isolation, but contextually wrong.
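To make that concrete, here is a toy Python sketch of an exact-match translation memory; the Japanese entry is a hypothetical illustration (すみません can mean "Excuse me" or "I'm sorry" depending on the situation), not output from any real TM product:

```python
# Toy exact-match translation memory: it keys only on the source string,
# so it cannot see the situation the sentence appears in.
translation_memory = {
    "すみません": "Excuse me.",  # hypothetical entry stored from an earlier project
}

def tm_lookup(source: str) -> str:
    """Return the stored translation for an exact match, ignoring all context."""
    return translation_memory.get(source, "<no match>")

# The same source string in two very different situations:
print(tm_lookup("すみません"))  # hailing a waiter: "Excuse me." is right
print(tm_lookup("すみません"))  # apologizing for a mistake: should be "I'm sorry."
```

The lookup is internally consistent, yet half the time its output is contextually wrong, which is exactly the human translator's point.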
Nobody tries to compete with a calculator in arithmetic or challenge a spreadsheet in data organization. These tools don’t diminish human mathematical thinking—they liberate it. Similarly, we must view AI tools as the ultimate intellectual amplifier, not a cognitive competitor.
For the non-initiated, let’s recall that today's AI tools are trained on large amounts of data: language models on monolingual text datasets, speech systems on speech datasets, and so on. The algorithms behind the training are generally based on the Transformer architecture, which works very well as an encoder-decoder. The Transformer is an advanced method in artificial intelligence that helps machines understand and generate human language efficiently. Imagine reading a book: rather than processing words strictly one after another, Transformers enable the system to examine all words simultaneously, capturing relationships and context, much like how our brains instantly connect meanings and ideas when reading a sentence.
Transformers represent a revolutionary breakthrough because they solved a fundamental bottleneck that plagued earlier AI systems. Before Transformers, AI had to process language sequentially, like reading word by word with a finger, which was slow and caused the system to "forget" important context from earlier in long texts. You may notice this still happens when prompts are too long or carry too many instructions. The Transformer's "attention mechanism" is like a photographic memory that can instantly recall and weigh the importance of every word in relation to every other word, no matter how far apart they are. Even so, the longer and more complex the prompt, the higher the chances that the LLM will honour the last requests better than the initial ones.
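For readers who like to see the machinery, here is a minimal NumPy sketch of the scaled dot-product attention behind that mechanism; the dimensions are tiny and the weights random, purely illustrative rather than any production model:

```python
import numpy as np

rng = np.random.default_rng(0)

tokens = ["the", "bank", "of", "the", "river"]    # five toy tokens
d = 8                                             # toy embedding size
X = rng.normal(size=(len(tokens), d))             # stand-in token embeddings

# In a real model these projections are learned; random here for illustration.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# One matrix product scores every token against every other token at once.
scores = Q @ K.T / np.sqrt(d)                     # shape (5, 5)
scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row

context = weights @ V                             # context-aware representations
print(weights.round(2))   # row i: how strongly token i "attends" to each token
```

Every row of the weight matrix is computed in parallel; nothing is read "word by word with a finger."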
The encoder-decoder design functions like an advanced translation system: the encoder acts as a diligent listener, absorbing and mapping out the relationships within the input text to create a comprehensive internal representation. The decoder, in turn, serves as an articulate speaker, generating appropriate responses word by word while continually referencing both the original input and the responses it has already produced. What makes Transformers truly remarkable is their ability to scale. The more data and computing power you provide, the more sophisticated their understanding becomes. They learn by analyzing billions of text examples, uncovering subtle patterns in language, from basic grammar to complex reasoning, cultural references, and even creative expression. This scalability is why the same basic transformer architecture can be utilized for a wide range of applications, including language translation, code generation, creative writing, and scientific research assistance.
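To see the encoder-decoder pattern in action, a few lines of Python suffice, assuming the Hugging Face transformers library and the public t5-small checkpoint (our illustrative choice, not something the architecture requires):

```python
# pip install transformers sentencepiece torch
from transformers import pipeline

# t5-small is a classic encoder-decoder Transformer: the encoder absorbs the
# whole input at once, and the decoder generates the output token by token.
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Abstract thinking remains a human strength.")
print(result[0]["translation_text"])
```

Swap the task and checkpoint, and the same interface handles summarization or question answering, which is the scalability argument in miniature.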
My point, then: why compete with machines at scaling? Let’s use them as tools for what they are.
With human oversight, these capabilities make us humans orders of magnitude more productive. Expert professionals who understand their domains can now leverage AI to handle the computationally heavy lifting while focusing their uniquely human cognitive resources on the abstract challenges that matter most.
The programming job market offers a telling case study. Young developers, lacking deep experience, often over-rely on AI-generated code without understanding its implications. They're competing with the tool rather than directing it. Meanwhile, seasoned developers use AI as an incredibly powerful assistant, applying their abstract understanding of software architecture, user needs, and system design to guide and refine AI outputs.
This pattern repeats across industries: those with strong foundational knowledge and abstract thinking skills leverage AI most effectively, while those without that cognitive foundation struggle to differentiate between useful and problematic AI outputs.
Even as researchers attempt to address AI limitations through Knowledge Graphs and Retrieval-Augmented Generation (RAG) systems—techniques that structure information to reduce hallucinations—these approaches still operate within the fundamental constraints of statistical processing. Knowledge graphs can represent relationships between concepts, and RAG can provide more reliable information retrieval, but neither can replicate the experiential, embodied cognition that enables true abstract thinking.
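For the curious, here is a deliberately tiny Python sketch of the RAG pattern; the keyword-overlap scoring stands in for a real embedding model, and the document snippets are invented for illustration:

```python
# Toy RAG: retrieve relevant passages first, then ground the prompt in them.
documents = [
    "Transformers process all tokens of a sentence in parallel.",
    "RAG systems retrieve reference passages before generating an answer.",
    "Knowledge graphs store explicit relationships between concepts.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in for embeddings)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

query = "How does RAG reduce hallucinations?"
context = "\n".join(retrieve(query, documents))

# The grounded prompt a real system would send on to its LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The retrieval step narrows the model's room for invention, but the generation step is still statistical: grounding constrains hallucination, it does not confer understanding.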
An "electronic brain" processes probabilities and correlations. A human brain integrates sensory experiences, emotional responses, social context, and imaginative projection into genuinely novel conceptual frameworks. The difference between our organic brains and “electronic brains” is not simply architectural; it's deeply ontological.
As we advance toward agentic AI workflows capable of automating complex processes across multiple languages and domains, the organizations that thrive will be those that recognize the ideal division of labour between human and artificial intelligence. They will understand how to leverage AI to enhance human strengths, particularly abstract thinking. Success will come not from a competition between humans and machines, but from their collaboration.
To illustrate this, consider a symphony orchestra: the instruments do not compete; they complement one another. The violin doesn’t attempt to sound like a trumpet; it plays its unique part, contributing to a harmonious whole that is greater than the sum of its components. Similarly, the future will belong to organizations that grasp the importance of a proper symbiosis between human and artificial intelligence.
Our takeaway: machines can analyze, extrapolate, and generate. They are much faster, affordable, and relentless. But they are not built on lived experiences and organic senses that interact in multiple ways: they cannot dream, they cannot mourn, they cannot wonder “what if?” the way a parent does when tucking in a child, or an artist does when staring at a blank canvas, because for that they would have to rely on training data and generalizations.
Abstract thinking, the power to conceive the invisible, to imagine the implausible, to weigh what cannot be calculated, remains uniquely human. Finding the cultural explanation behind a fact takes us mere milliseconds. And that is our gift.