Inside IBM’s 11 Billion Dollar Bet: What the Confluent Deal Reveals About AI’s Investment Paradox

IBM’s 11-billion-dollar move on Confluent shows that while headlines chase models and GPUs, the real cash is flowing into the data plumbing that makes AI actually work in production.


IBM’s 11‑billion‑dollar agreement to acquire Confluent is one of the clearest signals yet that, in the AI economy, lasting value is consolidating around data infrastructure rather than the model layer. At the same time, many enterprises continue to over‑fund visible AI experiments while under‑investing in the “boring” foundations that actually decide whether any of that spend pays back.

The Deal that Says the Quiet Part Out Loud

In December 2025, IBM announced a definitive agreement to acquire Confluent for about 11 billion dollars, paying 31 dollars per share in cash in a deal expected to close around mid‑2026, subject to regulatory and other customary approvals. IBM’s stated aim is to create a “smart data platform for enterprise generative AI”, explicitly linking the company’s AI ambitions to a real‑time data backbone rather than just more models or GPU capacity. On the numbers alone, it is a striking bet: a double‑digit‑billion outlay for a data‑in‑motion platform that sits beneath the application layer, not a splashy generative AI brand.

The transaction comes against a backdrop of rising AI spend but stubbornly elusive returns. Surveys of large enterprises show AI budgets growing sharply, yet only a minority report material, repeatable ROI, with leaders citing data integration, governance and scaling into production as the main bottlenecks rather than model capability. Read that way, IBM’s move looks less like an outlier and more like a leading indicator of where cash‑rich buyers believe the true economic choke‑points in AI now sit.

Durable Value Lives Below the Model Layer

To understand where durable value in the stack is actually accruing, AI & Data Insider spoke with Peter Pugh‑Jones, a real‑time data and streaming leader at Confluent. He did not comment on the IBM–Confluent transaction itself, instead focusing on general patterns he sees across large enterprises. In his view, the most enduring value in AI does not sit where most of the noise is.

“The most durable value in the AI stack does not sit in the model layer; it sits in the data and operational infrastructure layer that enables AI to function reliably at scale,” he explains. That value, he argues, concentrates in three places. First, the ability to keep data in motion: to stream, enrich, integrate and contextualise events continuously as they happen, rather than in overnight batches. Second, governance and compliance frameworks that keep pace with evolving AI and data regulations. Third, an operating model that connects AI outputs directly to business processes, so that models augment and automate real work instead of living in isolated dashboards.
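The first of those three pillars, keeping data in motion, can be sketched in plain Python. This is an illustrative toy, not Confluent's or Kafka's API: generators stand in for a streaming platform, and the `CUSTOMER_SEGMENTS` lookup table is an invented example of reference data. The point it shows is that context is attached to each event as it arrives, rather than in an overnight batch job.

```python
from dataclasses import dataclass
from typing import Iterator

# Hypothetical reference data used to enrich raw events in flight.
CUSTOMER_SEGMENTS = {"c1": "retail", "c2": "premium"}

@dataclass
class Event:
    customer_id: str
    amount: float
    segment: str = ""

def stream(events) -> Iterator[Event]:
    # Stand-in for a streaming source: events are yielded one at a time.
    for e in events:
        yield e

def enrich(events: Iterator[Event]) -> Iterator[Event]:
    # Context is added per event, as it happens - not in a nightly batch.
    for e in events:
        e.segment = CUSTOMER_SEGMENTS.get(e.customer_id, "unknown")
        yield e

raw = [Event("c1", 120.0), Event("c2", 45.0), Event("c9", 9.99)]
enriched = list(enrich(stream(raw)))
```

In a real deployment the source and sink would be managed topics on a streaming platform, but the shape of the logic, enrich-as-you-go rather than enrich-after-the-fact, is the same.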

This triad – data in motion, governance, and operating model – is where platforms like Confluent, modern observability stacks and workflow automation tools intersect. It is also precisely the layer that large acquirers and infrastructure players are now racing to consolidate, even as much of the public discourse still revolves around foundation model benchmarks and frontier research.

IBM’s CEO Arvind Krishna has been arguing that AI is not a hype bubble but an integration and maturity problem; the Confluent acquisition puts real money behind the idea that the constraint is data movement, not model capability.

The Investment Paradox Inside the Enterprise

Pugh‑Jones sees a stark mismatch between where enterprises spend on AI and where value actually materialises. “I see a clear pattern of enterprises over‑investing in the visible AI layer like models, copilots, proofs‑of‑concept because that’s what gets leadership attention,” he says. “But consequently, they’re largely under‑investing in the less glamorous foundations that actually determine ROI – real‑time data movement, integration, governance, and operational reliability.”

This gap is most obvious, he notes, in sectors like banking and financial services, telecoms and government. In those environments, ambitions around fraud prevention, citizen services or customer experience are high, but success hinges on “trusted, fresh, policy‑compliant data at scale” and a clear roadmap with tactical milestones that can be used to measure AI effectiveness. The paradox is that slideware often celebrates prototypes and pilots, yet the value curve only bends once organisations fix the invisible plumbing that gets the right data into the right decision at the right time.

External research reinforces this picture. Analysts consistently find that integration with core systems, data quality, and risk/governance concerns are the top reasons AI projects stall or under‑deliver, rather than model performance itself. In other words, organisations are not suffering from a shortage of intelligence; they are suffering from a shortage of reliable, governable context in motion.

How CFOs Should Redraw the AI Buckets

For finance leaders, the IBM–Confluent agreement is a useful lens to rethink AI spending even without taking a view on the deal itself. On one reading, an 11‑billion‑dollar price tag for a data‑streaming backbone sends a clear message: certain data and integration capabilities are being treated as strategic, long‑lived assets rather than discretionary IT projects. That contrasts with many AI line items that are still experimental, tactical or easily swapped out.

Pugh‑Jones offers a concise way to frame this for CFOs: “models are variable, but data capabilities are durable.” In his view, the strategic‑asset bucket should include anything that increases the enterprise’s ability to create repeatable, valuable data assets on top of an event‑streaming backbone. That means spend on governed data integration, data quality and observability, security and lineage, and the operating models that allow those capabilities to be reused across multiple business units and jurisdictions. Once in place, “these are the rails that let multiple AI use cases scale safely” rather than one‑off experiments that never cohere into a platform.

By contrast, the experimental or disposable bucket should capture one‑off prototypes, niche model fine‑tunes with no path to production and “AI features” that cannot be measured against hard metrics such as cost‑to‑serve, fraud loss reduction, cycle‑time compression or revenue uplift. The discipline is not about starving innovation, but about recognising that the half‑life of a specific model or feature is much shorter than the half‑life of a well‑designed data‑in‑motion and governance backbone. For boards and audit committees worried about AI’s risk profile, that distinction also matters: resilient, observable data capabilities are easier to regulate and assure than a sprawl of untracked experiments.

When AI Only Works Once the Streaming Arrives

The difference between those two buckets is easiest to see in concrete projects. One pattern Pugh‑Jones has observed repeatedly is in retail banking and digital payments, where teams build impressive AI prototypes for fraud or risk scoring that perform well in demos but struggle to generate meaningful financial impact. The missing ingredient is usually not model sophistication, but the timeliness and coherence of the signals feeding those models.

“Decisions were fed by delayed, inconsistent signals from multiple systems,” he notes of a common scenario, “so the impact stayed limited.” Once those organisations invested in an event‑streaming backbone and began continuously analysing transactions, login events, device fingerprints and customer interactions – with stream processing and consistent governance layered on top – the AI shifted from “post‑incident analysis” to real‑time, context‑aware prevention. That is the moment when value becomes visible: fraud caught before losses crystallise, charge‑backs reduced, and risk controls tightened without adding friction to legitimate customers.
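The shift from post-incident analysis to real-time prevention can be illustrated with a minimal event-time check. This is a hedged sketch, not any vendor's fraud engine: the sliding-window velocity rule, the 60-second window and the per-device threshold are all invented for illustration. Real systems would combine many such signals with model scores, but the mechanic, deciding on each event as it arrives rather than in a nightly batch, is the one described above.

```python
from collections import defaultdict, deque

# Illustrative thresholds - invented for this sketch.
WINDOW_SECONDS = 60
MAX_TXNS_PER_WINDOW = 3

# device_id -> timestamps of that device's recent transactions
recent = defaultdict(deque)

def score(device_id: str, ts: float) -> bool:
    """Return True if this transaction looks suspicious, at event time."""
    window = recent[device_id]
    # Evict timestamps that have fallen outside the sliding window.
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(ts)
    # Too many transactions from one device in a short span is flagged
    # before the loss crystallises, not discovered in tomorrow's batch.
    return len(window) > MAX_TXNS_PER_WINDOW

events = [("dev1", t) for t in (0, 5, 10, 15)] + [("dev2", 20)]
flags = [score(d, t) for d, t in events]
# dev1's fourth transaction inside the window is the one flagged
```

A batch pipeline running the same rule overnight would reach the same conclusion, hours after the fraudulent payments had already cleared.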

This pattern generalises beyond fraud. In any domain where customer journeys, payments, onboarding and services are expected to be instant and omnichannel, AI systems cannot rely on stale, partial data. Streaming architectures turn those journeys into live event flows that models can act on, and that operations teams can monitor and tune. The investment paradox, then, is that relatively unglamorous spending on streaming and governance is often what transforms an AI “innovation initiative” into a P&L‑relevant capability.

How Prototypes Really Die: The Batch Bottleneck

Pugh‑Jones is quick to point out that most AI prototypes do not fail in any dramatic way; they “simply become too expensive and too risky to operationalise” because the underlying data architecture never catches up. “The most common failure mode I see in enterprises is an AI prototype that looks excellent in a lab but stalls in production because the underlying data architecture remains batch‑based, siloed, and ownership‑fragmented,” he says.

Without fresh, governed, complete context, model accuracy drops, monitoring is weak and business stakeholders gradually lose confidence. This shows up in telecom customer experience programmes, government digital services and energy or industrial operations where decisions are highly time‑sensitive and data sources are diverse. In each case, the AI work itself may be competent, but the organisation is unwilling to accept the operational, regulatory and reputational risk of deploying it on top of brittle, legacy data pipelines. The result is a graveyard of impressive proofs‑of‑concept that never clear the hurdle into business‑critical workflows.

Research into AI ROI describes a similar curve: the majority of value is captured not by the first wave of experiments, but by the smaller subset of use cases that can be embedded into core processes with adequate data quality, observability and governance. That is precisely why infrastructure‑centric deals and long‑horizon capex into data platforms are gaining prominence, even as individual AI tools come and go.

Following the Money, Not Just the Models

Viewed through this lens, IBM’s planned acquisition of Confluent is more than a headline‑grabbing transaction; it is a data point in a broader re‑pricing of where value in AI actually resides. While the market conversation still skews towards GPUs, model families and agent demos, some of the largest cheques in the industry are being written for the infrastructure that moves, governs and operationalises data in real time.

Pugh‑Jones’s perspective does not pass judgement on any particular deal. Instead, it offers a framework for leaders trying to navigate AI’s investment paradox: treat models and features as variable, high‑velocity instruments, and treat data‑in‑motion, governance and operating models as the durable spine. Enterprises that follow the noise will keep over‑indexing on the former. Those that follow the money – and the failures – are quietly doubling down on the latter.


Anushka Pandit
Anushka is a Principal Correspondent at AI and Data Insider, with a knack for studying what's impacting the world and presenting it in the most compelling packaging to the audience. She merges her background in Computer Science with her expertise in media communications to shape tech journalism of contemporary times.
