The AI consolidation is coming: Gartner's latest reports validate the shift from LLM-Building to agentic workflows

Written by Manuel Herranz | 06/08/25

Gartner's enterprise AI research confirms what we've been saying about the future of AI business models: multilingual agentic workflows are where the real opportunity lies.

The artificial intelligence landscape is at an inflection point that closely resembles the early days of electricity generation. Just as countless small power companies eventually gave way to standardized grids and utility models, Gartner's latest research confirms that we're heading toward a significant consolidation in AI – one that will fundamentally reshape who wins and loses in this space, as unexpected players from several regions emerge as culturally relevant regional champions.

The great AI consolidation: Gartner's 75% prediction

According to Gartner's "Critical Insights" report, by 2029, the GenAI technology landscape is expected to consolidate into 75% fewer players as hyperscalers and SaaS platform vendors expand and absorb hybrid cloud vendors. It's not market speculation – it's the inevitable consequence of economic forces that are already reshaping the industry.

The parallels to historical infrastructure developments are striking. Gartner identifies that we're moving from a period of "vendor fragmentation" to consolidation through acquisitions and market failures. Just as the electricity industry evolved from thousands of local generators to a handful of major utilities, AI is following the same path.

Why LLM-building is a losing game (for most)

Gartner's research validates what we've observed: the current focus on building ever-larger language models is economically unsustainable for most players. Some governments are catching up (the EU with EuroLLM and OpenEuroLLM, Spain with the latest Salamandra models from the Barcelona Supercomputing Center, Saudi Arabia with its ALLaM-2-7B, and even Africa with InkubaLM, a small language model trained for five African languages: IsiZulu, Yoruba, Hausa, Swahili, and IsiXhosa) – but they are not betting on power-hungry mega models. The emphasis is on smaller, portable models that excel at GenAI, specifically private GenAI, and that generate language or undertake language tasks with a high degree of confidence. Mozilla has carried out an analysis of uncustomized local LLMs (7B, 12B, up to 33B) that most companies can afford to run locally, including a comparison of Gemma3:12B and DeepSeek R1:14B on translation tasks.

In Mozilla's analysis, the models evaluated in the WMT24++ paper demonstrated BLEU scores in the range of 27 to 32. As for COMET scores, the Opus-MT dashboard indicates a range of 48 to 76 for English-to-German translation, varying from one dataset to another. The local LLMs Mozilla explored were 4-bit quantized models, which also potentially explains the reduction in performance. While the local LLMs certainly had room for improvement, they showed the way, and Mozilla noted that for certain use cases the data-privacy and offline-use benefits may outweigh small performance trade-offs.
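For readers unfamiliar with the metric: the BLEU figures above come from standard tooling (sacreBLEU in practice), but the underlying idea is easy to sketch. Below is a minimal, simplified BLEU-style scorer – clipped n-gram precision with a brevity penalty and crude smoothing – for illustration only, not the exact metric used in WMT24++.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams of length n in a token list
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty.
    Assumes a non-empty, whitespace-tokenizable hypothesis."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
        total = max(sum(hyp_counts.values()), 1)
        # Crude smoothing so one zero precision doesn't zero the geometric mean
        precisions.append((overlap if overlap else 0.1) / total)
    # Brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 100; partial overlap scores proportionally lower. Production evaluation should always use sacreBLEU, which fixes tokenization and smoothing so scores are comparable across papers.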

Gartner's "Current State of AI Agents" report reveals a crucial insight – current LLM-based AI agents have "low levels of agency". They are essentially "advanced forms of AI assistants with additional features such as tool use, but not really AI agents."

This explains why even billion-dollar investments in LLM development are struggling to show sustainable returns. The models themselves are becoming commoditized infrastructure, heavily subsidized and burning cash. Even impressive developments, such as China's DeepSeek or Europe's Brussels-sponsored university projects, are following this same capital-intensive, low-margin pattern.

The real opportunity: Automated outcomes, not tools

Gartner's "Rise of the Machines" report identifies the fundamental shift that's already underway: by 2028, 30% of what B2B software tools provide today will be replaced by providers that deliver end-to-end business outcomes in the form of AI-automated services.

This transformation represents a complete reimagining of how AI creates value. Traditional SaaS tools are essentially digital Swiss Army knives – they provide capabilities, but require significant human expertise, time, and ongoing management to deliver actual business results. You buy a CRM tool, but you still need teams to configure it, train users, maintain data quality, and interpret the outputs to drive decisions.

Automated outcome services flip this model entirely. Instead of selling you translation software, imagine a service that simply ensures your global customer communications are perfectly localized – handling everything from initial content analysis to cultural adaptation to quality assurance to delivery, all in real-time. You don't manage the process; you simply receive the outcome.

This is the infrastructure play we've been anticipating. Instead of selling AI "tools" that require extensive implementation and maintenance, the winners will provide automated, outcome-based services that businesses can plug into – just like connecting to the electrical grid.

The economics of outcomes vs. tools

The report highlights that top AI companies are reaching $5 million in annualized recurring revenue 13 months faster than their SaaS counterparts, precisely because they're focused on delivering outcomes rather than tools. This acceleration comes from several economic advantages:

  1. Reduced Implementation Friction: Traditional AI tools require months of integration, training, and fine-tuning. Outcome-based services can be deployed immediately because the complexity is handled by the provider, not the customer.
  2. Pay-for-Value Pricing: Instead of paying for software licenses whether you use them effectively or not, outcome-based services align pricing with actual business value delivered. Gartner notes that "usage- and outcome-based billing further reduces adoption risk, as companies only pay for what they need, when they need it."
  3. Scalability Without Complexity: When a business grows globally, traditional tools require additional licenses, training, and management overhead. Outcome-based services scale automatically – if you need customer support in 15 new languages overnight, the service simply expands to meet that need.
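As a back-of-the-envelope illustration of the pay-for-value point, here is a toy comparison of per-seat licensing versus usage-based billing. All prices and usage figures are invented for the example:

```python
def license_cost(seats, price_per_seat):
    # Traditional SaaS: fixed cost per seat, whether the tool is used or not
    return seats * price_per_seat

def usage_cost(units, price_per_unit):
    # Usage-based billing: cost tracks what was actually consumed
    return units * price_per_unit

# Hypothetical scenario: a 50-seat license vs. 8 active users
# translating ~2,000 words each at $0.02 per word
fixed = license_cost(50, 120.0)       # 6000.0 per month, used or not
metered = usage_cost(8 * 2000, 0.02)  # 320.0 for actual consumption
```

The gap between the two numbers is exactly the "adoption risk" Gartner describes: under licensing, the buyer carries the cost of unused capacity; under usage- or outcome-based billing, the provider does.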

The strategic infrastructure play

When you flip a light switch, you don't think about power generation, transmission infrastructure, or grid management. You simply expect light. Similarly, when businesses need customer communications in 47 languages, they shouldn't need to think about translation models, workflow management, or quality assurance protocols. They should simply receive perfectly localized communications.

This shift creates massive competitive advantages for early movers because it fundamentally changes customer relationships. Tool vendors compete on features and pricing. Outcome providers become essential infrastructure that's difficult to replace.

The role of LLMs in this new paradigm

Is this the end of LLMs as we know them? I don't think so, but their role is fundamentally changing. These models will have a place as components within larger automated systems rather than standalone products. They're becoming specialized tools within outcome-delivery infrastructures – like motors in appliances rather than products you buy separately.

LLMs excel as assistants for drafting reports, emails, and conducting research. But we are increasingly aware of their limitations: their generative nature will always carry the risk of producing hallucinations that, in fields like law, can have embarrassing consequences (such as quoting non-existent legislation, as we've seen in several high-profile cases).

This is precisely why the outcome-based model is superior: it wraps LLMs within systems that include fact-checking, verification, human oversight, and quality assurance. When a legal research service delivers case analysis, it's not just running queries through GPT-4 – it's using LLMs as one component in a comprehensive process that includes legal database verification, precedent checking, and expert review.
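The wrapping pattern is simple to sketch. In the toy version below, the generative step and every verification step are injected as plain functions, which is the point: the LLM is one replaceable component, not the product. All names and checks are hypothetical stand-ins for real systems such as legal database lookups or expert review queues.

```python
def outcome_pipeline(query, generate, verifiers):
    """Run a generated draft through every verifier; nothing reaches
    the customer unless all checks pass. `generate` and each
    (name, check) pair are injected, so any model can sit inside."""
    draft = generate(query)
    failures = [name for name, check in verifiers if not check(draft)]
    if failures:
        # Escalate to human oversight instead of delivering unverified output
        return {"status": "needs_review", "failed_checks": failures, "draft": draft}
    return {"status": "delivered", "output": draft}

# Toy stand-ins: a fake LLM and a one-entry "legal database"
known_citations = {"Case A v. B (2021)"}
fake_llm = lambda q: f"Analysis of {q}, citing Case A v. B (2021)."
verifiers = [
    ("citation_exists", lambda d: any(c in d for c in known_citations)),
    ("non_empty", lambda d: len(d.strip()) > 0),
]

result = outcome_pipeline("contract dispute", fake_llm, verifiers)
```

A draft citing a case the database has never heard of would come back as `needs_review` rather than being delivered – the hallucination risk is contained by the pipeline, not by the model.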

Agentic workflows: Where language meets automation

Here's where the convergence becomes clear. Gartner predicts that by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI, up from 0% today. But their research also reveals that current "AI Agent 1.0" solutions struggle with "enterprise-contextualized decision making."

The breakthrough will come through multisystem AI agents that can operate across diverse enterprise environments, and this is where language infrastructure becomes critical. These agents need to:

  • Access fragmented enterprise data across multiple languages and formats
  • Maintain context and meaning across linguistic and cultural boundaries
  • Operate in real-time across global, multilingual workflows
  • Integrate with existing business systems regardless of their language or regional configuration
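To make the first two requirements concrete, here is a toy sketch of a language-aware routing layer: a lookup-based detector stands in for a real language-identification model, and simple handlers stand in for per-language agents, while a shared context dict carries state across linguistic boundaries. All names are hypothetical.

```python
def detect_language(text):
    # Toy stand-in for a real language-identification model
    markers = {"hola": "es", "bonjour": "fr", "hello": "en"}
    for word, lang in markers.items():
        if word in text.lower():
            return lang
    return "en"  # fallback

def route(record, handlers, context):
    """Dispatch a record to the handler for its detected language,
    while maintaining shared context across the workflow."""
    lang = detect_language(record["text"])
    handler = handlers.get(lang, handlers["en"])
    context.setdefault("languages_seen", set()).add(lang)
    return handler(record, context)

# Per-language handlers standing in for specialized agents
handlers = {
    "en": lambda r, c: ("en", r["text"]),
    "es": lambda r, c: ("es", r["text"]),
    "fr": lambda r, c: ("fr", r["text"]),
}
ctx = {}
out = route({"text": "Hola, necesito ayuda"}, handlers, ctx)
```

The real engineering challenge, of course, is everything this sketch elides: robust detection, cultural context, and meaning preserved across handoffs – which is exactly where language infrastructure earns its place in the stack.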

The multilingual advantage: Language as infrastructure

What Gartner's reports don't fully address – but what our experience in language operations makes clear – is that language diversity will be a key differentiator in agentic workflows. As businesses become more global and agents become more autonomous, the ability to seamlessly operate across languages is no longer just a feature – it has become foundational infrastructure.

Consider Gartner's finding that enterprises need "data interoperability" and "hybrid cloud collaboration" for effective AI agents. In a global economy, this data exists in dozens of languages, with cultural contexts that current large language models (LLMs) struggle to navigate. The companies that solve this multilingual automation challenge will own a crucial piece of the new AI infrastructure stack.

From hype to sustainable business models

Gartner's research confirms what we've observed in our own enterprise deployments: the market is shifting from AI experimentation to demanding clear outcomes and defined ROIs. Their surveys show that 59% of buyers expect to increase their spending on AI services by almost 20%, but they want solutions that "reduce costs and headcount, mitigate risk, reduce complexity and drive business growth."

This isn't about having the latest LLM or the most parameters – it's about delivering reliable, measurable business value. In a global economy, that value increasingly depends on seamless, multilingual operations.

The LangOps opportunity

As the AI industry consolidates around infrastructure and outcomes rather than model-building, Language Operations (LangOps) emerges as a critical layer in the new stack. Just as DevOps became essential for software deployment, LangOps will be essential for global AI deployment.

The companies that survive the coming consolidation won't be those with the biggest models or the most venture funding. They'll be those that solved real business problems with sustainable economic models – and in our interconnected world, that increasingly means solving them in every language that matters to your business.

Gartner's research points to a future where AI becomes as ubiquitous and reliable as electricity. When that happens, the value won't be in generating the power – it'll be in enabling everything that power makes possible. For global businesses, this means multilingual, agile workflows that can automate processes across languages, cultures, and contexts.

The question isn't whether AI will consolidate – Gartner's research shows it's already happening. The question is whether your organization will be positioned to thrive in the infrastructure layer that emerges, or whether you'll be one of the 75% that doesn't make it through the transition.

Pangeanic has been building multilingual AI infrastructure and solutions for over two decades, long before it became fashionable. As the industry consolidates around sustainable business models and agentic workflows, we're positioned to help enterprises navigate the transition from AI experimentation to multilingual automation at scale.