What Everyone Got Wrong About AI in 2025

Whilst industry discourse fixates on what's next, enterprise leaders are grappling with what went wrong. From endless proof-of-concept cycles to the mythology of scale, 2025 revealed critical missteps that turned promising AI initiatives into costly lessons. Here are five patterns, drawn from practitioners across banking, technology, and research, that explain why so many AI projects stalled and what actually needs to change.

2025 was supposed to be the year enterprise AI graduated from experimentation to transformation. Boards approved budgets. Executives set timelines. Technology leaders launched initiatives across customer service, software development, and operations.

Yet as the year closes, a different pattern has emerged: a widening gap between AI capability and enterprise readiness.

The bottleneck wasn’t what AI could do—it was how organisations approached deployment, measured progress, and understood what “readiness” actually required. The result? Proof-of-concept purgatory. Capability without behaviour change. Scale mistaken for progress.

This isn’t a story about what’s coming in 2026. It’s an accounting of what went wrong in 2025—drawn from enterprise leaders, researchers, and practitioners who watched promising initiatives stall, not because the technology failed, but because the strategy did.

THEME 1: The Proof-of-Concept Trap—Mistaking Activity for Progress

The most visible failure of 2025 wasn’t a collapsed initiative. It was the normalisation of perpetual piloting—organisations running dozens of proofs-of-concept whilst failing to ship a single production system at scale.

“They mistook Proof of Concept activity for progress,” says Prasad Prabhakaran, Head of Artificial Intelligence at esynergy. “Many PoCs were driven by peer pressure and tooling excitement, not by a clearly defined business problem, value stream, or operating model. Without agentic engineering fundamentals or change in how work actually happens, AI stayed as demos on the side rather than intelligence embedded into the business.”

Alina Timofeeva, Keynote Speaker at TED Conferences and Influence Board Member at BCS, The Chartered Institute for IT, captures the paradox: “AI adoption is high. But AI maturity is not. Most organisations are still stuck in pilot mode: budgets are rising, teams are experimenting, vendors are selling copilots and solutions.” According to an MIT report, 95% of organisations fail to capture ROI from AI—a damning statistic that reveals the gap between activity and value.

McKinsey’s 2025 State of AI report found that whilst 23% of organisations report scaling AI, the vast majority remain in proof-of-concept purgatory—succeeding at demonstration, failing at integration.

Rohit Dhawan, Group Head of AI & Advanced Analytics at Lloyds Banking Group, describes the pattern: “A lot of people rushed into delivering GenAI pilots without thinking about how they will scale or without considering how they can be used as building blocks for a whole-of-process or domain reimagination.”

The rush to pilot wasn’t driven by strategic clarity. It was driven by FOMO, vendor marketing, and the belief that experimentation itself constituted progress. But pilots consume budget and engineering time. When they don’t graduate to production, they create pilot fatigue—teams lose faith that AI will ever move beyond demos.

Prabhakaran’s prediction for 2026: “The endless PoC cycle will quietly die. As budgets tighten and boards demand outcomes, experimentation without transformation will lose patience. Enterprises will return to fundamentals: designing AI into business processes, redefining target operating models with AI at the core, and engineering intelligence as part of day-to-day operations rather than as isolated experiments.”

THEME 2: The Horizontal Trap—Platformising Before Proving Value

Another strategic misstep: building horizontal AI platforms before demonstrating vertical wins.

Rakesh Ranjan, Director of IBM Software at IBM, identifies this as a core failure mode: “They built horizontally when the org needed vertical wins. Their instinct was to platformise early: agents, frameworks, shared services, reuse, extensibility. From an architecture perspective, that is correct. From an enterprise change perspective, it diluted the perceived impact.”

The logic seemed sound. Rather than solving one narrow problem, organisations would build reusable AI infrastructure. The problem? Executives don’t fund infrastructure. They fund outcomes.

“Executives respond to end-to-end stories: ‘This class of cases is now resolved 30% faster with fewer escalations and higher CSAT.’ Instead, they often saw slices: better RCA, better logs, better suggestions,” Ranjan explains.

AI teams that spent 2025 building platforms delivered pieces of value distributed across many workflows rather than concentrated, measurable impact in one critical area. The value was real but diffuse, and diffuse value is hard to defend when budgets tighten.

The alternative—vertical-first deployment—focuses on solving one end-to-end problem completely before expanding. Pick a single, high-value workflow, deploy AI, prove ROI, demonstrate transformation, and then extract horizontal components for reuse.

The lesson: Enterprises needed fewer AI platforms and more AI-transformed processes.

THEME 3: The People Problem—Capability Without Behaviour Change

The most damaging assumption of 2025: “Once the tools are good enough, adoption will follow.”

It didn’t.

Ranjan describes the misstep: “Leaders assumed that once tools were good enough, behaviour would follow. In practice, behaviour follows incentives, muscle memory, and manager reinforcement—not tool quality.”

Organisations invested heavily in AI capability—models, infrastructure, tooling. They underinvested in behaviour design—the incentives, workflows, and manager training required to make people actually use AI differently.

Timofeeva quantifies the imbalance precisely: “AI transformation is only 10% technology; 20% data and 70% change management. Yet very few enterprises have industrialised delivery at scale—with the operating model, data foundations, governance, and change management capabilities required to make AI durable.”

That 70% figure—change management—is where most organisations failed in 2025. They optimised for the 10% (technology) and underinvested in the majority of what actually determines success.

Ranjan identifies the specific failure point: the middle layer. “Leaders underestimated middle-layer drag. Their strategy engaged two extremes very well: senior leadership, who loved the narrative and metrics, and individual engineers, who loved the idea of AI assistance. The weak link was the middle layer: frontline managers, escalation leaders, and process owners.”

These middle managers quietly control which tools are “safe” to use, whether AI output is treated as assistive or risky, and how performance is measured. When they weren’t brought into the AI strategy, they became passive resisters.

Dr. Adio-Adet Dinika, Research Fellow at The Distributed AI Research Institute (DAIR), frames this as a deeper governance failure: “What enterprise tech leaders got wrong in 2025 was treating AI failure as a technical problem rather than a political and economic one. Leaders focused on model capability and scale whilst ignoring power: who controls the systems, who bears the risk, and who absorbs harm when things go wrong.”

AI wasn’t rolled out with procedural legitimacy—meaningful worker participation, recourse mechanisms, or governance structures capable of saying “no.” “The result wasn’t just underperforming systems; it was illegitimate deployment. Enterprises mistook scale for progress and novelty for inevitability,” Dinika adds.

Timofeeva’s conclusion reinforces this: “Transformation isn’t about technology alone. It’s about mindset, culture, and people—and whether you can turn AI from a set of pilots into sustainable change.”

The lesson: Technology adoption is a behaviour change problem, not a capability problem.

THEME 4: The Foundation Failure—Building on Unstable Ground

Beneath these failures sits a structural issue: organisations deployed AI on top of broken data foundations.

Dhawan is blunt: “You need solid foundation in data quality, governance, and scalable integration as enablers for GenAI and agentic AI. Many leaders treated GenAI as a plug-and-play solution, which often leads to failure.”

When data was incomplete, inconsistent, outdated, or biased, AI amplified those flaws at scale. “This wasn’t just about overhype; it stemmed from underestimating the complexities of bad data inputs perpetuating biases, high implementation costs, and a lack of cross-functional alignment. For instance, historical biases in training data amplified inequities in sectors like healthcare and finance, turning promising projects into costly liabilities,” Dhawan continues.

A 2025 Databricks study found that 68% of enterprise AI initiatives cite data quality as a top-three blocker, yet investment in data infrastructure continues to lag investment in AI tooling.

The result? AI systems that hallucinated because retrieval pulled from outdated documents, made biased decisions because training data encoded historical inequities, and failed compliance audits because data lineage couldn’t be traced.

Dhawan’s prescription: “The route forward? Shift to hybrid strategies that prioritise robust data pipelines, ethical AI frameworks, and iterative scaling—focusing on measurable ROI rather than shiny demos.”

The lesson: AI doesn’t fix broken data infrastructure—it exposes it.

THEME 5: The Scale Mythology—Bigger Isn’t Always Better

Perhaps the most widespread misstep was the belief that AI progress equals scale—bigger models, more parameters, larger infrastructure investments.

Dinika identifies this as a fading mythology: “The AI trend that will quietly die in 2026 is the belief that bigger is automatically better. Large models won’t disappear, but the mythology around them is eroding. Enterprises are encountering the real costs: opacity, environmental impact, auditability problems, and deep vendor dependency.”

Throughout 2025, organisations chased state-of-the-art rather than state-of-appropriate. In practice, many discovered that smaller, specialised models outperformed frontier models on narrow tasks, inference costs for massive models made production deployment economically unsustainable, and opacity created governance risks.

Patrick Philips, Senior Vice President at Sedgwick, predicts the correction: “I think next year, LLMs getting bigger and bigger as a method of showing value will die. You will see models not talking about how many parameters they have, and moving towards more compound-type growth.”

Dhawan sees a related shift: “Retrieval-Augmented Generation (RAG) as a standalone crutch for AI accuracy will quietly fade. While it helped mitigate hallucinations in LLMs by pulling in external data, it’s already showing cracks as more advanced agentic systems with built-in memory and multi-step reasoning take over.”

The mythology of scale extended beyond models to deployment strategy. Organisations assumed that rolling out AI broadly would accelerate adoption. Instead, it created coordination complexity and diffuse accountability.

The lesson: Progress wasn’t defined by which organisations deployed the biggest models, but by which deployed the right-sized solutions with the governance to sustain them.

Conclusion: What Actually Needs to Change

The missteps of 2025 weren’t failures of technology. They were failures of strategy, sequencing, and organisational design.

The organisations that struggled didn’t lack access to capable models or sufficient budgets. They lacked clarity on which problems justified AI investment, operational readiness to absorb workflow changes, data foundations solid enough to support production systems, behaviour design that changed incentives, and vertical wins before platformising horizontally.

Timofeeva frames the strategic question for 2026: “If you’re a board member or part of a leadership team, the real question is: Are we building the sustainable conditions for AI to deliver value over time?”

Dinika offers the sharpest framing: “The future of enterprise AI won’t be defined by which trends survive, but by whether institutions are willing to build systems they can actually be accountable for.”

Ranjan adds a practical note: “Most organisations need the same message repeated in five different formats, over six months, by three different leaders, before it becomes true. Leaders over-rotated on tooling, under-rotated on narrative repetition.”

The path forward in 2026 isn’t about better models or more ambitious roadmaps. It’s about confronting what 2025 revealed: that AI adoption is a behaviour change problem, that vertical transformation must precede horizontal platformisation, that data governance is foundational, that organisational readiness matters more than model capability, and that scale without accountability isn’t progress—it’s risk accumulation.

Prabhakaran’s prediction may prove the most telling: “As budgets tighten and boards demand outcomes, experimentation without transformation will lose patience. Enterprises will return to fundamentals.”

The question for 2026 isn’t what new capabilities will emerge. It’s whether enterprises will apply the lessons of 2025—or repeat the same missteps with newer tools.

Anushka Pandit
Anushka is a Principal Correspondent at AI and Data Insider, with a knack for studying what's shaping the world and presenting it compellingly to readers. She combines her background in Computer Science with her expertise in media communications to shape contemporary tech journalism.
