Across sectors, enterprises are racing to embed generative AI into customer journeys, finance workflows and operational processes. The language has become familiar: pilots, proofs of concept, minimum viable agents. Yet behind the flurry of experimentation sits a quieter anxiety: leaders do not fully trust the data underneath.
Recent industry surveys consistently rank data quality and availability as top obstacles to AI adoption, ahead of model performance or raw infrastructure cost. Separate AI governance digests highlight that many organisations still lack a formal AI governance policy, or only address AI indirectly through broader risk frameworks, even as deployment accelerates.
This is the paradox of GenAI scaling in 2026. On the one hand, organisations are rapidly increasing their use of AI, including more autonomous, agentic systems. On the other, foundational questions about data reliability, literacy and governance remain unresolved. A global CDO study by Informatica from Salesforce and Deloitte, for example, finds that while most organisations plan to expand AI investments, many are concerned that new pilots are progressing without resolving the data reliability issues uncovered by earlier efforts.
Against this backdrop, two vantage points converge. Informatica, now part of Salesforce, sits inside the data and integration stack, watching how GenAI interacts with master data, lineage and governance. Deloitte sits inside boardrooms and transformation programmes, watching how autonomy reshapes organisational risk and decision‑making. Together, their perspectives show that data reliability has shifted from a behind‑the‑scenes hygiene issue to the governor on GenAI scale.
Redefining Reliability for Autonomous Systems
For traditional analytics, “reliable” data meant something very specific: records that were complete, up to date and internally consistent enough to support dashboards and reports. If there were gaps or disagreements between systems, people debated them in meetings and reconciled them in spreadsheets. The friction was real but containable.
Generative and agentic systems change the definition. As Levent Ergin, Chief Industry Strategist for Agentic AI, Regulatory Compliance & Sustainability at Informatica from Salesforce, puts it, “With GenAI, the concept of data reliability changes from what we’re traditionally used to. It is no longer just about whether data is complete or up to date. It’s about whether the environment feeding the model is stable enough to support autonomous decision-making.”
“Generative AI doesn’t simply retrieve information like a dashboard. It synthesises, predicts and, increasingly, it acts,” he says. “That means traditional definitions of ‘good data’ fall short. If lineage is unclear, the model can’t explain its reasoning. If master data isn’t reconciled, the same customer may appear as three different entities. If context is lost, a revenue figure might mean bookings in one system and billings in another. Reliability, then, is about traceability, consistency and governance.”
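The failure modes Ergin lists are easy to make concrete. The sketch below is a hypothetical illustration (the records, systems and threshold are invented, and real master data management uses far richer matching than this) of how the same customer can surface as three entities across systems, and how even a naive similarity check can flag the overlap:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical customer records pulled from three systems:
# the "same" customer appears as three different entities.
records = [
    {"system": "CRM",     "id": "C-1042", "name": "Acme Industries Ltd"},
    {"system": "ERP",     "id": "800231", "name": "ACME Industries Limited"},
    {"system": "Billing", "id": "ac-9",   "name": "Acme Ind. Ltd."},
]

def similarity(a: str, b: str) -> float:
    """Crude name similarity; real MDM tools use far richer matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs that look like the same entity under different keys.
THRESHOLD = 0.75  # illustrative cut-off, not a recommended value
for r1, r2 in combinations(records, 2):
    score = similarity(r1["name"], r2["name"])
    if score >= THRESHOLD:
        print(f"Possible duplicate: {r1['system']}:{r1['id']} "
              f"<-> {r2['system']}:{r2['id']} (score {score:.2f})")
```

Without that reconciliation step, an agent setting a credit limit would treat the three records as three unrelated customers, which is exactly the kind of silent inconsistency that becomes an action rather than an argument.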
From Ergin’s perspective, that shift demands a different lens. “To recontextualise, I therefore encourage data leaders to approach reliability from a business perspective. From this lens, the purpose is simple: reducing operational and regulatory exposure. This gets you asking pertinent questions and approaching reliability in the way that’s needed for AI systems.”
This is where GenAI’s promise and its risk intersect. When systems simply inform humans, reliability failures show up as frustrating arguments about whose report is “right”. When systems act, the same inconsistencies become silent changes to prices, credit limits, supply allocations or recommendations. Data reliability becomes a question of delegated authority.
From Curated Pilots to Entangled Reality
Nowhere is this clearer than in the transition from early pilots to production. In a pilot, Ergin notes, “you’re working in a controlled, almost ideal world. The dataset is curated, oversight is manual and the blast radius is small.” The AI sits next to core systems, not inside them, and issues can be caught by a project team.
“Production is a whole different beast,” he continues. “AI is no longer operating in isolation. It’s embedded in a live enterprise environment with real dependencies.
“The moment GenAI touches ERP, CRM, supply chain or other systems, reliability stops being a data engineering issue and becomes an operational resilience issue. Duplicate entities surface. Definitions clash. Latency creeps in. And if there’s no rollback plan, confidence evaporates quickly.”
That evolution from sandbox to entanglement is where many organisations now find themselves. In pilot mode, capital flowed to visible applications—copilots, chat interfaces, impressive demonstrations. Foundations remained largely invisible. As Marc Beierschoder, Partner at Deloitte Consulting AG, observes, “For years, unreliable data produced unreliable reports. Humans reconciled the inconsistencies. They debated numbers. They applied judgment. The friction was absorbed in meetings. Now, as GenAI moves from experimentation into execution — and increasingly into agentic systems — unreliable data produces scaled decisions.”
“That changes everything,” he says.
Industry experience backs this up. Benchmarks on enterprise AI adoption show that while most large organisations have at least one AI use case in production, relatively few report high levels of foundational data maturity or integrated governance, and many still cite data quality and availability as leading barriers to scale. As autonomy creeps into workflows, what used to be a reporting nuisance becomes a strategic and operational risk.
Why Organisations Advance on Imperfect Ground
If the foundations are fragile, why do leaders continue to push ahead with GenAI scaling? Beierschoder is clear that this is not naivety so much as calculation.
“Because waiting for perfect foundations is a competitive illusion. Markets are moving. Productivity expectations are rising. Boards are asking for measurable AI returns. Standing still is not neutral — it is decline,” he argues.
“Leaders are not naive about their data gaps. They are making a calculated bet: improve foundations while moving forward. The real mistake is not acceleration. It is blind acceleration.
“What differentiates the leaders from the followers is discipline. Disciplined organizations narrow scope. They attach autonomy to high-value, clearly owned decision domains. They invest in reliability where economic impact is measurable.
“The undisciplined ones deploy copilots broadly and hope governance will catch up. It rarely does.”
This discipline gap shows up clearly in governance. AI governance landscape surveys routinely find that many organisations either lack a formal AI governance policy or have only emerging, fragmented structures. Yet boards and executive teams are simultaneously approving ambitious roadmaps for GenAI and agentic AI.
The result is the pattern Beierschoder sees across transformations: “Failure is rarely catastrophic. It is cumulative. You see slight inconsistencies; conflicting recommendations; rising override rates; managers double-checking outputs; friction creeping into workflows. Trust erodes gradually.
“Deloitte’s experience across large-scale transformations shows that erosion of trust is the single biggest inhibitor of AI scaling — not model accuracy. Once managers stop trusting the system, they slow it down. They reintroduce approval layers. They hedge decisions. Momentum dies quietly.
“The failure pattern is organizational, not technical. Autonomy without trust does not collapse. It stalls.”
In other words, unreliable foundations do not always cause visible blow‑ups. They more often cause slow‑moving sclerosis, as humans re‑insert themselves into processes the organisation had hoped to streamline.
Designing Reliability Backwards from Decisions
Given this backdrop, one of the most practical questions for leaders is where to start. Many have felt paralysed by the sheer volume and messiness of enterprise data. Here, Ergin urges a reversal of instinct.
“The mistake is trying to ‘clean all the data’ before doing anything meaningful. That’s paralysing. I’d flip the script,” he says.
“Start with the first GenAI use case you genuinely intend to put into production. Then work backwards. Which data domains does it rely on? Are those domains reconciled and validated in UAT? Is lineage mapped? Who signs off before pre-production?
“When you anchor reliability to a specific use case, the problem becomes manageable and commercial. You’re not fixing data in the abstract; you’re protecting revenue, customer trust or regulatory standing. Focus first on high-revenue workflows, customer-facing journeys and regulated domains.
“AI testing isn’t just model testing. You’re validating data integrity, integration discipline and operational continuity. Framed that way, reliability becomes a business priority rather than a technical afterthought.”
Beierschoder sees a parallel shift in how leading organisations organise themselves. “They redesign before they scale,” he says.
“First, they assign ownership at the decision level. Not just platform ownership, but outcome ownership. Second, they treat reliability as continuous engineering. Governance is monitored, stress-tested, and reviewed — not documented and forgotten. Third, they align incentives with long-term system health, not short-term speed. The decisive difference is cultural.
“Some organizations are still experimenting with tools. Others are redesigning how decisions flow through the enterprise. The winners will not be the fastest experimenters, but rather, the fastest redesigners.”
Taken together, these recommendations point to a practical playbook. Rather than talking abstractly about “data reliability” at the enterprise level, leaders can:
- Choose a specific, economically meaningful decision domain where they are willing to assign clear ownership.
- Map the data domains, lineage and integration points that feed that decision.
- Establish sign‑off criteria for reliability before pre‑production, including rollback plans and monitoring.
- Treat each successful implementation as both a value case and a pattern for broader redesign.
This is less glamorous than launching dozens of pilots. But it is much closer to what sustained autonomy requires.
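To make the playbook concrete, here is one hypothetical way a pre-production reliability gate could be encoded; the checks, owners and decision domain are invented for illustration and not drawn from either firm's methodology:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReliabilityCheck:
    """One gate a use case must pass before pre-production."""
    name: str
    owner: str            # the outcome owner who signs off
    passed: Callable[[], bool]

# Hypothetical gates for a single, economically meaningful decision
# domain (here: a pricing agent). Real checks would query catalogs,
# lineage tools and monitoring systems rather than return constants.
def lineage_mapped() -> bool: return True
def master_data_reconciled() -> bool: return True
def rollback_plan_tested() -> bool: return False  # not yet signed off

gates = [
    ReliabilityCheck("Lineage mapped for pricing inputs", "Head of Pricing", lineage_mapped),
    ReliabilityCheck("Customer master reconciled in UAT", "CDO office", master_data_reconciled),
    ReliabilityCheck("Rollback and monitoring in place", "Platform lead", rollback_plan_tested),
]

failures = [g for g in gates if not g.passed()]
if failures:
    for g in failures:
        print(f"BLOCKED: {g.name} (owner: {g.owner})")
else:
    print("All reliability gates passed: promote to pre-production.")
```

The shape matters more than the code: every gate is bound to a named owner and a specific decision domain, so reliability is signed off per use case rather than debated in the abstract.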
Can GenAI Fix the Data Mess It Depends On?
There is a tempting narrative that GenAI itself can help rescue the situation: using AI to clean, validate and govern data at scale. Ergin is cautiously optimistic but blunt about the limits.
“GenAI can absolutely help, but it would be a stretch to think that it can replace governance,” he says.
“It’s very good at spotting anomalies, detecting duplication patterns and monitoring drift across environments. In fact, AI-driven data quality tools are becoming increasingly common, as noted by analysts such as Gartner, who highlight AI’s growing role in augmenting data management practices.
“But here’s the nuance: any AI used to ‘clean’ data is itself dependent on reliable data and structured controls. It can flag inconsistencies, but it cannot assign ownership, design sign-off frameworks or architect rollback strategies.
“This is where human-in-the-loop matters. AI is a force multiplier, not a substitute for discipline. Used correctly, it accelerates validation and monitoring. Trusted blindly, it simply scales instability faster.”
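Drift monitoring of the kind Ergin describes is straightforward to prototype. The sketch below computes a population stability index (PSI), a common heuristic for distribution shift, between a baseline window and a current window of a numeric field; the data and the 0.2 alert threshold are conventional illustrations, not settings from any particular product:

```python
import math

def psi(baseline: list[float], current: list[float], buckets: int = 10) -> float:
    """Population stability index between two samples of a numeric field.

    Buckets span the baseline's observed range; a PSI above roughly 0.2
    is a common rule-of-thumb signal that the distribution has shifted.
    """
    lo, hi = min(baseline), max(baseline)
    span = (hi - lo) or 1.0  # guard against a constant baseline

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * buckets
        for v in values:
            # Clamp out-of-range values into the first or last bucket.
            i = min(max(int((v - lo) / span * buckets), 0), buckets - 1)
            counts[i] += 1
        # Floor at a tiny proportion to avoid log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Hypothetical example: a numeric field feeding an agent, sampled in
# a baseline month and a current month.
baseline = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]
current = [120, 125, 118, 122, 119, 124, 121, 123, 120, 122]
print(f"PSI: {psi(baseline, current):.3f}")  # far above 0.2: flag drift
```

This is the force-multiplier pattern in miniature: the statistic surfaces the shift automatically, but deciding whether it reflects a genuine business change or a broken pipeline still requires the ownership and sign-off structures described earlier.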
This aligns with the broader market picture. Vendors and analysts increasingly promote AI‑augmented data quality and observability tools, but there is growing recognition that tooling without clear accountability can even make problems less visible by adding layers of abstraction. Without clear lines of responsibility, powerful diagnostic capabilities can generate dashboards that nobody owns.
The implication is not that AI‑augmented data management is optional. On the contrary, as data volumes and system complexity grow, manual approaches will not suffice. But the sequencing matters: automation must sit on top of defined ownership, governance and decision rights, not instead of them.
What Getting Ahead Looks Like
So what does it mean to be out in front on data reliability for GenAI in 2026?
One clue lies in the CDO survey from Informatica from Salesforce and Deloitte. It reports that 90% of organisations are concerned that new AI pilots are progressing without resolving the data reliability issues uncovered by earlier efforts. At the same time, 76% say their company’s visibility and governance have not fully kept pace with employees’ use of AI, 75% say their workforce needs stronger data literacy skills, and 74% say greater AI literacy is required.
Alongside this anxiety sit concrete plans. Just over half of organisations in the same research plan to use vendor‑supplied AI agents, compared to 44% that expect to develop them internally, and on average they anticipate partnering with eight separate vendors to support AI management priorities in 2026. That combination—high concern, high ambition, fragmented tooling—captures the precariousness of the moment.
Beierschoder argues that the necessary conversations need to move decisively upstream. “Boards should stop discussing AI in terms of ambition. They should start discussing it in terms of delegated authority. When systems prioritize customers, allocate capital, influence pricing, or sequence product launches, those are governance decisions — even if they are executed by algorithms.
“The board-level questions are simple but uncomfortable:
- Which decisions are we comfortable delegating to machines?
- Who owns the objective function?
- How frequently is behavioral drift reviewed?
- What is our tolerance for scaled error?
“Data reliability should not be framed as technical debt. It should be framed as risk exposure and capital protection. Autonomy scales what you encode. If you encode ambiguity, you scale ambiguity; if you encode clarity, you scale advantage.
“Agentic AI will not fail because the technology is immature. It will fail where leadership structures remain unchanged.”
Ergin, from the solution provider side, comes back to the same point from a different angle. Reliability is no longer a question of whether a given data set is “clean enough” for a quarterly report. It is a question of whether the organisation is willing to let a machine act on its behalf, at scale, using that data.
The familiar phrase for this transition is “moving AI from pilot to production”. But that language now understates the transformation. The more accurate description is moving from isolated experiments on curated data to living systems that participate in everyday decision‑making. In that world, data reliability is not a backstage concern. It is the new front line of AI strategy.