
6 min read

28/05/2025

Why abstract thinking is AI's insurmountable wall


In an era where AI can write code, translate languages, and even compose poetry, one cognitive domain remains exclusively human: abstract thinking. Instead of fearing or mocking the generative errors and hallucinations produced by early versions of this technology, we should embrace it as a powerful tool that enhances human potential.

These generative errors and "hallucinations" are not to be feared; rather, these tools should be recognized as extensions of ourselves and as an unprecedented means to amplify human thought, provided they are used wisely.

The great AI paradox 

We're witnessing something fascinating in 2025. OpenAI's latest models hallucinate up to 79% of the time on some reasoning benchmarks (and roughly a third of the time on others). Anthropic's CEO admits, "We do not understand how our own AI creations work." Yet we're simultaneously entering an age where these same systems can process millions of documents, write sophisticated code, and navigate complex linguistic relationships with unprecedented speed.

This paradox reveals a fundamental truth: AI excels at pattern recognition and statistical processing, but it cannot truly think in the abstract sense that defines human cognition. 

What makes abstract thinking uniquely human 

Abstract thinking—our ability to conceptualize intangible ideas like freedom, justice, humour, or love—emerges from something AI cannot replicate: mammalian consciousness's organic, multisensorial nature. When you contemplate the concept of "wisdom" or envision a hypothetical future scenario, you're drawing from a lifetime of embodied experiences, emotional memories, and sensory inputs that no statistical model can truly simulate. 

Consider how abstract thinking manifests in your daily life, in scenarios where no AI is capable of "deeply helping you":

  • Pattern recognition beyond data: Seeing the more profound meaning in a friend's behaviour change 
  • Hypothetical reasoning: Imagining "what if" scenarios that have never occurred, but “sensing” that one possibility is more likely to happen than another 
  • Metaphorical understanding: Grasping why we say "time is money" or "love is a battlefield", and understanding context-dependent sentences whose overall sense may even be ironic at times
  • Ethical reasoning: Weighing moral implications that exist beyond any training dataset 

As the source research shows, abstract thinking involves "ideas and principles that are often symbolic or hypothetical"—concepts that require the kind of contextual, experiential understanding that emerges from living in a physical, social, and emotional world. 

Agentic AI (see our ECOChat) recreates our judgment, hesitations, and thought processes, but mechanically and only from data. Humans don't work like that: even when statistics and data tell an ML system that the translation of a sentence is such, "we know" that in context the translation is a different one. Speakers of highly contextual languages like Japanese know this very well: it's not that the translation memory was wrong; the translation was right, but it was contextually wrong.

The augmentation paradigm: Transformers are tools, not competitors

Nobody tries to compete with a calculator in arithmetic or challenge a spreadsheet in data organization. These tools don’t diminish human mathematical thinking—they liberate it. Similarly, we must view AI tools as the ultimate intellectual amplifier, not a cognitive competitor. 

For the non-initiated, let's recall that today's AI tools are trained on large monolingual text datasets, speech systems on speech datasets, and so on. The algorithms behind the training are generally based on the Transformer architecture, which works very well as an encoder-decoder and helps machines understand and generate human language efficiently. Imagine reading a book: rather than processing words strictly one after another, Transformers enable the system to examine all words simultaneously, capturing relationships and context, much like how our brains instantly connect meanings and ideas when reading a sentence.

Transformers represent a revolutionary breakthrough because they solved a fundamental bottleneck that plagued earlier AI systems. Before Transformers, AI had to process language sequentially—like reading word by word with a finger—which was slow and caused the system to "forget" important context from earlier in long texts. You may notice this still happens when prompts are too long, with too many instructions. The Transformer's "attention mechanism" is like a photographic memory that can instantly recall and weigh the importance of every word in relation to every other word, no matter how far apart they are. Even so, the longer and more complex the prompt, the higher the chances that the LLM will handle the last requests better than the initial ones.
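The "attention mechanism" described above can be sketched in a few lines. This is a minimal, illustrative version of scaled dot-product attention in NumPy; the shapes and random values are toy examples, not a real model:

```python
import numpy as np

def attention(Q, K, V):
    """Weigh every token against every other token in a single step."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                            # context-mixed representations

# Four "tokens", each a 3-dimensional embedding (toy numbers).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
out = attention(x, x, x)   # self-attention: every token attends to every other
print(out.shape)           # (4, 3) — one context-aware vector per token
```

Note that every token is compared with every other token at once, rather than left to right: that is the "examine all words simultaneously" property in miniature.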

The encoder-decoder design functions like an advanced translation system: the encoder acts as a diligent listener, absorbing and mapping out the relationships within the input text to create a comprehensive internal representation. The decoder, in turn, serves as an articulate speaker, generating appropriate responses word by word while continually referencing both the original input and the responses it has already produced.

What makes Transformers truly remarkable is their ability to scale. The more data and computing power you provide, the more sophisticated their understanding becomes. They learn by analyzing billions of text examples, uncovering subtle patterns in language, from basic grammar to complex reasoning, cultural references, and even creative expression. This scalability is why the same basic transformer architecture can be used for a wide range of applications, including language translation, code generation, creative writing, and scientific research assistance.
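The decoder loop described above, generating one word at a time while looking back at both the input and its own previous output, can be sketched in plain Python. The scoring function here is a stand-in lookup table, purely hypothetical, not a real model:

```python
def toy_decode(encoded_input, score_next, max_len=10, stop="<eos>"):
    """Generate tokens one by one, conditioning on input and prior output."""
    output = []
    for _ in range(max_len):
        token = score_next(encoded_input, output)  # sees both input and history
        if token == stop:
            break
        output.append(token)
    return output

# A hypothetical scorer that "translates" by table lookup, one word per step.
table = {0: "le", 1: "chat", 2: "dort"}
def scorer(encoded, generated_so_far):
    return table.get(len(generated_so_far), "<eos>")

print(toy_decode([0, 1, 2], scorer))  # ['le', 'chat', 'dort']
```

A real decoder replaces the lookup table with a learned network that scores the whole vocabulary at every step, but the loop structure is the same.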

My point is, then: why compete with machines at scaling? Let's use them as tools for what they are:

  • Language processing at scale: Analyzing relationships between concepts across massive datasets 
  • Research acceleration: Scanning millions of documents to surface relevant information 
  • Summarization: Drafting short abstracts from very long documents 
  • Sentence completion: Helping us draft emails or texts with some reference information 
  • Code generation: Writing functional software from natural language descriptions, adapting old code to new code, etc. 
  • Pattern detection: Identifying trends and connections humans might miss in hundreds of books, lessons, articles, etc. 
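To make the "research acceleration" bullet concrete, here is a deliberately naive sketch that ranks documents against a query by word overlap. Production systems use learned embeddings rather than raw word matching; this toy version only shows the shape of the workflow:

```python
def rank_documents(query, docs):
    """Return the documents sharing at least one word with the query,
    most-overlapping first."""
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

docs = [
    "Transformers use attention to relate every word to every other word",
    "The violin plays its part in the orchestra",
    "Attention mechanisms weigh word relationships in context",
]
print(rank_documents("attention between words", docs))
```

Scale the same idea up to millions of documents and better similarity measures, and you have the kernel of AI-assisted research: the machine filters, the human interprets.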

With human oversight, these capabilities make us humans orders of magnitude more productive. Expert professionals who understand their domains can now leverage AI to handle the computationally heavy lifting while focusing their uniquely human cognitive resources on the abstract challenges that matter most. 

The experience gap: Why youth struggle and experts thrive 

The programming job market offers a telling case study. Young developers, lacking deep experience, often over-rely on AI-generated code without understanding its implications. They're competing with the tool rather than directing it. Meanwhile, seasoned developers use AI as an incredibly powerful assistant, applying their abstract understanding of software architecture, user needs, and system design to guide and refine AI outputs. 

This pattern repeats across industries: those with strong foundational knowledge and abstract thinking skills leverage AI most effectively, while those without that cognitive foundation struggle to differentiate between useful and problematic AI outputs. 


The technical reality: Statistics vs. consciousness 

Even as researchers attempt to address AI limitations through Knowledge Graphs and Retrieval-Augmented Generation (RAG) systems—techniques that structure information to reduce hallucinations—these approaches still operate within the fundamental constraints of statistical processing. Knowledge graphs can represent relationships between concepts, and RAG can provide more reliable information retrieval, but neither can replicate the experiential, embodied cognition that enables true abstract thinking. 

An "electronic brain" processes probabilities and correlations. A human brain integrates sensory experiences, emotional responses, social context, and imaginative projection into genuinely novel conceptual frameworks. The difference between our organic brains and "electronic brains" is not simply architectural; it's deeply ontological.

Let’s learn to embrace symbiosis 

As we advance toward agentic AI workflows capable of automating complex processes across multiple languages and domains, organizations that will thrive are those that recognize the ideal division of labour between human and artificial intelligence. They will understand how to leverage AI to enhance human strengths, particularly in abstract thinking. Success will not come from a competition between humans and machines, but rather from their collaboration. 

To illustrate this, consider a symphony orchestra: the instruments do not compete; they complement one another. The violin doesn’t attempt to sound like a trumpet; instead, it plays its unique part, contributing to a harmonious whole that is greater than the sum of its components. Similarly, the future will belong to organizations that grasp the importance of a proper symbiosis between human and artificial intelligence. For example:

  • Use AI for computational tasks while reserving strategic thinking for humans. Let machines crunch the numbers, process the data, and handle the routine analysis. We have already used software to do many of these things for decades. But when it comes to deciding what those numbers mean for your business strategy, when it's time to pivot based on market signals, or when you need to decide with incomplete information, that's where human strategic thinking must come in. AI can tell you that customer satisfaction scores have dropped 15% in the Southeast region; only human insight can connect that to the cultural shift happening in that market and devise a response that resonates with real people. 
  • Leverage AI's ability to process vast amounts of information, but rely on human judgment for interpretation and application. AI excels at finding patterns in massive datasets that would take humans years to analyze. It can scan and OCR thousands of research papers, customer reviews, or market reports in minutes. But pattern recognition isn't wisdom. When an AI model identifies that certain phrases in customer feedback correlate with churn risk, it takes human understanding to recognize that customers aren't just reporting problems—they're expressing a loss of trust that requires a fundamentally different response than a technical fix. 
  • Deploy AI for pattern recognition while trusting human insight for meaning-making. Machines are exceptional at identifying what happened and even predicting what might happen next based on historical patterns. But they struggle with the question that matters most: what does it mean? When an AI system detects an unusual spike in social media mentions for your brand, human insight is needed to understand whether it's a crisis brewing, a viral moment to capitalize on, or simply noise that will fade away. 
  • Implement AI for efficiency while maintaining human oversight for quality and ethics. AI can dramatically accelerate workflows, from content creation to customer service to financial analysis. But efficiency without wisdom can be dangerous. It takes human judgment to recognize when an AI-generated customer service response, while technically accurate, might come across as tone-deaf. It requires human oversight to ensure that an AI system optimizing for engagement doesn't inadvertently amplify harmful content or discriminate against certain groups.  

And finally: Our minds are not just models 

Our takeaway: machines can analyze, extrapolate, and generate. They are much faster, even affordable, and relentless. But they are not built on experiences and organic senses that interact in multiple ways: they cannot dream, they cannot mourn, they cannot wonder "what if?" the way a parent does when tucking in a child, or an artist staring at a blank canvas, because they would rely on training data and generalizations for that.

Abstract thinking—the power to conceive the invisible, to imagine the implausible, to weigh what cannot be calculated—remains uniquely human. Finding a cultural explanation for a fact takes us mere milliseconds. And that is our gift.