Daniel J Glover

AI pragmatism: From hype to real value


"The party is not over, but the industry is starting to sober up."

That observation from TechCrunch captures the mood shift defining enterprise AI strategy in 2026. After two years of breathless announcements, billion-dollar valuations, and demos that promised to revolutionise everything, organisations are now asking a different question: where is the return on investment?

What is AI pragmatism?

AI pragmatism is the enterprise shift from chasing AI capabilities to demanding measurable ROI. It means right-sizing models for specific tasks, building internal AI factories for repeatable deployment, and preparing for market volatility rather than betting everything on the largest model available. For IT leaders, it represents a mature approach that prioritises business outcomes over technological novelty.

This article examines the key trends reshaping enterprise AI adoption and provides a practical framework for IT leaders navigating this transition. You will learn why scaling laws are hitting limits, how smaller models are outperforming giants for specific tasks, and what the looming AI bubble deflation means for your technology investments. For a broader look at how AI is reshaping the entire software development lifecycle, see AI is eating software.


The End of the Scaling Era

For years, the AI industry operated on a simple belief: bigger models mean better results. OpenAI's GPT-3 in 2020 proved that scaling a model 100 times larger could unlock emergent capabilities like coding and reasoning without explicit training. This launched what Kian Katanforoosh, CEO of AI platform Workera, calls the "age of scaling" - a period defined by the assumption that more compute, more data, and larger transformer architectures would inevitably drive breakthroughs.

That assumption is now being challenged.

Ilya Sutskever, co-founder of OpenAI, recently stated that current models are plateauing and pretraining results have flattened. Yann LeCun, who left Meta to launch his own AI startup, has long argued against overreliance on scaling and stressed the need for better architectures.

| Era       | Primary Approach        | Key Belief                                  |
|-----------|-------------------------|---------------------------------------------|
| 2012-2020 | Architecture innovation | Better designs unlock new capabilities      |
| 2020-2025 | Scaling laws            | Larger models automatically improve         |
| 2026+     | Pragmatic deployment    | Right-sized solutions for specific problems |

For IT leaders, this shift has immediate implications. The race to deploy the largest possible model is giving way to a more nuanced approach: matching model capabilities to business requirements.

Small Language Models Are Winning Enterprise Deployments

The most significant trend for enterprise adoption is the rise of small language models (SLMs). These focused, fine-tuned models are proving that less can indeed be more.

Andy Markus, AT&T's Chief Data Officer, told TechCrunch that "fine-tuned SLMs will be the big trend and become a staple used by mature AI enterprises in 2026." The reason is straightforward: when properly fine-tuned for specific business applications, smaller models match larger generalised models in accuracy whilst delivering superior cost efficiency and speed.

French AI startup Mistral has demonstrated that its small models outperform larger competitors on several benchmarks after fine-tuning. This challenges the assumption that enterprises need frontier models for every use case.

| Factor                 | Large Language Models    | Small Language Models        |
|------------------------|--------------------------|------------------------------|
| Inference cost         | High (cloud-dependent)   | Low (can run on-premises)    |
| Latency                | Variable, often slower   | Consistent, faster response  |
| Fine-tuning effort     | Expensive, complex       | Accessible, affordable       |
| Domain accuracy        | Good general knowledge   | Excellent after fine-tuning  |
| Data privacy           | Data leaves organisation | Can run locally              |
| Deployment flexibility | Cloud-first              | Edge and on-premises capable |

Jon Knisley, an AI strategist at enterprise AI company ABBYY, notes that "the efficiency, cost-effectiveness, and adaptability of SLMs make them ideal for tailored applications where precision is paramount." This is particularly relevant for edge computing scenarios where processing must happen locally.

Practical Implications for Your AI Roadmap

If your organisation is planning AI investments, consider these questions:

  • Are you defaulting to frontier models when a fine-tuned smaller model would suffice?
  • Have you calculated the total cost of ownership including inference costs at scale?
  • Could on-premises deployment of smaller models address data sovereignty concerns?
  • Are you building internal capability to fine-tune models for your specific domain?
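The second question above, total cost of ownership, is worth making concrete. The sketch below compares per-token API pricing for a frontier model against the fixed costs of self-hosting a fine-tuned SLM. All figures are hypothetical placeholders; substitute your own vendor quotes and infrastructure costs.

```python
def annual_inference_cost(requests_per_day: int, tokens_per_request: int,
                          price_per_million_tokens: float) -> float:
    """Yearly spend under per-token API pricing."""
    tokens_per_year = requests_per_day * tokens_per_request * 365
    return tokens_per_year / 1_000_000 * price_per_million_tokens

# Hypothetical scenario: 50,000 requests/day at ~1,500 tokens each.
frontier_api = annual_inference_cost(50_000, 1_500, price_per_million_tokens=10.0)

# Self-hosted fine-tuned SLM: assumed fixed hosting plus a one-off tuning project.
slm_hosting = 4_000 * 12   # assumed monthly GPU server cost
slm_finetune = 20_000      # assumed one-off fine-tuning effort

print(f"Frontier API, year one:    ${frontier_api:,.0f}")
print(f"Self-hosted SLM, year one: ${slm_hosting + slm_finetune:,.0f}")
```

The crossover point depends entirely on volume: at low request rates the API wins, but fixed-cost self-hosting pulls ahead as usage scales, which is why the calculation must use your projected volumes rather than pilot-phase traffic.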

The AI Bubble Will Deflate - Plan Accordingly

MIT Sloan Management Review columnists Thomas Davenport and Randy Bean have issued a stark warning: the AI bubble will deflate, and organisations should prepare for the economic consequences.

The parallels to the dot-com era are striking. Sky-high startup valuations, emphasis on growth over profits, massive infrastructure buildout, and relentless media hype - we have seen this pattern before. The trigger could be a bad quarter from a major vendor, a breakthrough from a lower-cost competitor, or simply a few large enterprises pulling back on AI spending.

The authors subscribe to Amara's Law applied to AI: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."

For IT leaders, this means:

Short-term caution: Avoid overcommitting to vendors whose valuations depend on continued hypergrowth. Negotiate flexible contracts that do not lock you into long-term commitments with companies that may not exist in their current form.

Long-term confidence: AI remains a transformative technology. The bubble deflating does not mean AI loses value - it means valuations return to reality. Continue building internal capabilities and infrastructure.

Vendor diversification: Do not bet everything on a single AI provider. The landscape is shifting rapidly, and today's leader may be tomorrow's cautionary tale.

Agentic AI Gets Practical with MCP

AI agents - autonomous systems that can take actions on behalf of users - failed to live up to their hype in 2025. The primary reason was connectivity: agents struggled to access the tools, databases, and APIs where actual work happens.

Anthropic's Model Context Protocol (MCP) is changing this. Described as "USB-C for AI," MCP provides a standardised way for agents to communicate with external systems. The protocol has rapidly gained adoption, with OpenAI, Microsoft, and Google all embracing it. Anthropic has donated MCP to the Linux Foundation's Agentic AI Foundation, signalling its emergence as an industry standard.

Rajeev Dham, a partner at Sapphire Ventures, predicts that these advancements will lead to agent-first solutions taking on "system-of-record roles" across industries.

What This Means for Enterprise Architecture

Your enterprise architecture team should be evaluating:

  • MCP readiness: Are your internal systems prepared to expose capabilities through MCP-compatible interfaces?
  • Security boundaries: How will you govern what agents can access and what actions they can perform? A robust AI governance framework is essential before deploying autonomous agents.
  • Audit trails: Agentic actions need logging and accountability mechanisms.
  • Human oversight: Where do you require human approval before agent actions execute?
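The security-boundary and audit-trail points above can be sketched in a few lines. MCP is built on JSON-RPC 2.0, and the request below follows the spec's tools/call shape; the allow-list and audit log, however, are hypothetical examples of in-house governance policy, not part of the protocol itself.

```python
import json
from datetime import datetime, timezone

ALLOWED_TOOLS = {"search_orders", "create_draft_email"}  # security boundary
AUDIT_LOG = []                                           # audit trail

def call_tool(request_json: str) -> dict:
    """Enforce an allow-list and record every agent tool call before dispatch."""
    request = json.loads(request_json)
    tool = request["params"]["name"]
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Agent may not call tool: {tool}")
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "arguments": request["params"]["arguments"],
    })
    # ... dispatch to the real tool here; a stub result for the sketch:
    return {"jsonrpc": "2.0", "id": request["id"], "result": {"status": "ok"}}

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "search_orders", "arguments": {"customer": "ACME"}},
})
print(call_tool(request))
```

In practice these checks would sit in a gateway between the agent runtime and your systems, so that policy is enforced centrally rather than per-integration.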

Building Your AI Factory

Companies that are "all in" on AI as a competitive advantage are creating what MIT Sloan calls "AI factories" - combinations of technology platforms, methods, data, and previously developed algorithms that accelerate AI development.

Leading banks pioneered this approach years ago. BBVA opened its AI factory in 2019; JPMorgan Chase created OmniAI in 2020. Consumer products company Procter & Gamble and software company Intuit have followed, with Intuit calling its platform GenOS - a generative AI operating system for the business.

Organisations without this internal infrastructure force their data scientists to repeatedly solve the same foundational problems: which tools to use, what data is available, which methods to employ. This makes AI development expensive and slow.

| AI Factory Component | Purpose                                           | Example                       |
|----------------------|---------------------------------------------------|-------------------------------|
| Model registry       | Catalogue available models and their capabilities | MLflow, Weights & Biases      |
| Feature store        | Reusable data transformations and features        | Feast, Tecton                 |
| Prompt library       | Tested, versioned prompts for common tasks        | Internal documentation        |
| Evaluation framework | Consistent measurement of model performance       | Custom benchmarks             |
| Governance layer     | Policy enforcement and compliance                 | Integrated approval workflows |
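To make one of these components concrete, here is a minimal sketch of a versioned prompt library. The class and field names are illustrative assumptions; a production version would add persistent storage, access control, and hooks into the evaluation framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: int
    template: str  # uses str.format placeholders

class PromptLibrary:
    """In-memory registry of tested, versioned prompts (illustrative only)."""

    def __init__(self):
        self._prompts: dict[tuple[str, int], PromptTemplate] = {}

    def register(self, prompt: PromptTemplate) -> None:
        self._prompts[(prompt.name, prompt.version)] = prompt

    def latest(self, name: str) -> PromptTemplate:
        versions = [p for (n, _), p in self._prompts.items() if n == name]
        return max(versions, key=lambda p: p.version)

library = PromptLibrary()
library.register(PromptTemplate("summarise_ticket", 1, "Summarise: {ticket}"))
library.register(PromptTemplate("summarise_ticket", 2,
                                "Summarise in two sentences: {ticket}"))

prompt = library.latest("summarise_ticket")
print(prompt.version, prompt.template.format(ticket="Login fails on mobile"))
```

The point of even this small abstraction is reuse: teams pull the latest tested prompt by name instead of each re-solving the same wording problem, which is exactly the repeated foundational work the AI factory is meant to eliminate.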

A Framework for Pragmatic AI Investment

Based on these trends, here is a framework for IT leaders evaluating AI investments in 2026:

Start with the Problem, Not the Technology

The hype cycle encouraged organisations to ask "How can we use AI?" The pragmatic approach asks "What business problem needs solving, and is AI the right tool?"

Right-Size Your Models

Resist the assumption that bigger is better. Evaluate whether a fine-tuned small model could deliver equivalent results at a fraction of the cost. Consider where on-premises deployment addresses data privacy or latency requirements.

Build Infrastructure Before Scaling Use Cases

Invest in your AI factory - the platforms, data pipelines, and governance frameworks that enable rapid, repeatable AI development. This infrastructure compounds in value as you deploy more use cases. If you are unsure where to start, a structured AI enablement strategy can help prioritise your investments.

Prepare for Market Volatility

The AI vendor landscape will consolidate. Build flexibility into contracts and maintain optionality across providers. Develop internal skills that transfer across platforms.

Embrace Standardisation

MCP and similar standards reduce integration friction and future-proof your investments. Prioritise vendors and approaches aligned with emerging industry standards.


The Path Forward

The transition from AI hype to AI pragmatism is not a retreat - it is a maturation. Organisations that built sustainable AI capabilities during the hype cycle are now positioned to accelerate. Those that chased demos and announcements may find themselves with expensive experiments that never reached production.

The winners in 2026 will be IT leaders who combine technical understanding with business discipline. They will deploy smaller models where appropriate, build reusable infrastructure, and maintain the flexibility to adapt as the market inevitably shifts. For a phased approach to building AI capabilities, consider starting with high-value, low-risk use cases before scaling.

The industry may be sobering up, but the real work - and the real value - is just beginning.


AI Investment Checklist for 2026

Use this checklist when evaluating AI initiatives:

Before Committing Budget:

  • [ ] Have we defined a specific business problem, not just "use AI"?
  • [ ] Have we evaluated smaller, fine-tuned models against frontier models?
  • [ ] Do we understand the total cost of ownership including inference at scale?
  • [ ] Is our data ready - clean, accessible, and governed?

Vendor and Architecture:

  • [ ] Are contracts flexible enough to survive vendor consolidation?
  • [ ] Do we have optionality across multiple AI providers?
  • [ ] Are we aligned with emerging standards like MCP?
  • [ ] Can we run models on-premises if data sovereignty requires it?

Governance and Risk:

  • [ ] Do we have AI governance policies in place?
  • [ ] Are audit trails and accountability mechanisms defined?
  • [ ] Have we identified where human oversight is required?
  • [ ] Is there a plan for the AI bubble deflating?

Infrastructure:

  • [ ] Are we building reusable AI infrastructure (model registry, feature store)?
  • [ ] Do we have internal capability to fine-tune and evaluate models?
  • [ ] Can we measure ROI consistently across AI initiatives?

