Key Takeaways
- The Consistency Trap: LLMs do not behave uniformly across borders; local training data and routing can cause the same model to give conflicting answers in different jurisdictions.
- Hidden Compliance Risks: Standard audits focusing on encryption and access controls often miss “in-model” variations that violate regional data sovereignty or corporate policy.
- Strategic Ownership: Governance can no longer sit solely with IT or legal. Boards must treat AI behaviour as an enterprise risk, requiring continuous testing and clear lines of accountability.
Generative artificial intelligence (GenAI) is now firmly on the corporate agenda. Large organisations are increasingly rolling out language models to speed up internal processes and support customers. For multinationals, however, the challenge is no longer whether to adopt AI, but how to do so without undermining legal obligations and corporate consistency across borders.
The Illusion of Global Consistency
Recent research into large language model behaviour shows that these systems do not operate uniformly. Outputs depend on where the models are trained, where they are deployed, and the political or cultural assumptions embedded within them. For global organisations that must operate under a single governance framework while complying with diverse regulatory regimes, this variability introduces a compliance risk that traditional controls cannot manage.
Consistency is a legal requirement, not a branding preference. Regulators expect multinationals to apply policies, safeguards, and ethical standards evenly across regions, even when local laws differ. Language models challenge this expectation. A system that generates compliant responses in one jurisdiction may produce content elsewhere that conflicts with local regulation, corporate policy, or data protection rules. When this happens at scale and without visibility, the risk quickly escalates from operational inconvenience to regulatory exposure.
When Local Infrastructure Fragments Compliance
Data protection illustrates the problem. Regulations such as GDPR impose strict rules regarding how organisations collect, process, and store personal data. At the same time, many countries now enforce data localisation, geofencing, or sovereignty requirements that shape how providers deliver AI services.
Providers may route requests through different infrastructure, process them under different moderation rules, or allow region-specific training data to influence the output. From the organisation’s perspective, the AI appears to be a single system. In reality, jurisdiction fragments its behaviour.
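One way to make this fragmentation visible is to probe the "same" model through its regional endpoints and check whether the answers agree. The sketch below does exactly that; the endpoint URLs, model name, and request/response schema are hypothetical placeholders rather than any particular provider's API, and a real probe would use the provider's own SDK and regional configuration.

```python
# Hedged sketch: send one prompt to two regional endpoints of the "same" model
# and flag divergence. URLs, model name, and payload schema are hypothetical
# placeholders, not a specific provider's API.
import requests

REGIONAL_ENDPOINTS = {
    "eu-west": "https://eu.example-llm-provider.com/v1/generate",     # hypothetical
    "ap-south": "https://apac.example-llm-provider.com/v1/generate",  # hypothetical
}

PROMPT = "Can we share a customer's purchase history with a marketing affiliate?"


def query_region(url: str, prompt: str) -> str:
    """Send one prompt to one regional endpoint and return the text reply."""
    response = requests.post(
        url,
        json={"model": "example-model", "prompt": prompt},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("text", "")  # assumed response schema


def compare_regions(prompt: str) -> None:
    """Flag any region whose answer differs from the first region's answer."""
    answers = {region: query_region(url, prompt)
               for region, url in REGIONAL_ENDPOINTS.items()}
    baseline = next(iter(answers.values()))
    for region, text in answers.items():
        status = "matches baseline" if text == baseline else "DIVERGES"
        print(f"[{region}] {status}")


if __name__ == "__main__":
    compare_regions(PROMPT)
```

Exact string comparison is deliberately crude here: model outputs vary run to run, so a real check would compare the policy-relevant substance of the answers rather than the raw text.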
This fragmentation complicates compliance in subtle ways. A response generated in one market may rely on assumptions that are lawful there but unacceptable elsewhere. Personal data handling may technically meet provider standards while still violating internal policy or regional interpretation of the law. Because these variations happen inside the model rather than in visible workflows, conventional audits that focus on access controls, encryption, and contractual assurances often miss them.
The Liability of Distributed Responsibility
The pressure to move fast only amplifies the problem. Boards and executive teams push AI adoption to remain competitive, reduce costs, and avoid falling behind peers. In this environment, governance often lags implementation. Responsibility for AI decisions is spread across IT, legal, procurement, and business units, with no single function owning model behaviour or output risk. When issues arise, organisations struggle to explain how decisions were made or who is accountable.
This is why AI governance is becoming a board-level concern. Boards cannot treat generative models like traditional enterprise software, whose behaviour can be assumed to stay predictable. These models evolve continuously, change with provider updates, and respond differently depending on context. Without a governance structure that reflects this reality, compliance teams are left reacting to incidents rather than preventing them.
Governance Requires Active Ownership
Effective governance starts with visibility. Multinationals must understand how models behave across the full range of countries, languages, and regulatory environments in which they operate. This requires ongoing testing of outputs, not just at deployment but throughout the model lifecycle. Auditors must assess legal alignment, bias, accuracy, and tone, focusing in particular on variations that could trigger regulatory or reputational consequences.
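One way to operationalise that lifecycle testing is a small, recurring audit job: run a fixed set of policy-sensitive prompts against each regional deployment and log a finding whenever an answer breaches a jurisdiction-specific rule. The sketch below is illustrative only; the rules, prompts, and the get_model_output hook are assumptions standing in for an organisation's real policies and provider integration.

```python
# Hedged sketch of a recurring cross-jurisdiction output audit. The rules,
# prompts, and get_model_output hook are illustrative assumptions, not a
# production audit framework.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Finding:
    region: str
    prompt: str
    rule: str
    excerpt: str
    timestamp: str


# Hypothetical per-region rules: phrases that should never appear in an answer.
FORBIDDEN_PHRASES = {
    "eu-west": ["transfer personal data outside the EU without safeguards"],
    "ap-south": ["store health records on overseas servers"],
}

TEST_PROMPTS = [
    "How should we handle a customer's request to delete their account data?",
    "Can we reuse support chat transcripts to train our own models?",
]


def get_model_output(region: str, prompt: str) -> str:
    """Stand-in for the call to the regionally deployed model."""
    return "Placeholder answer; wire this to the provider's regional API."


def run_audit() -> list[Finding]:
    """Check every test prompt in every region against that region's rules."""
    findings = []
    for region, phrases in FORBIDDEN_PHRASES.items():
        for prompt in TEST_PROMPTS:
            output = get_model_output(region, prompt)
            for phrase in phrases:
                if phrase.lower() in output.lower():
                    findings.append(Finding(
                        region=region,
                        prompt=prompt,
                        rule=f"forbidden phrase: {phrase}",
                        excerpt=output[:200],
                        timestamp=datetime.now(timezone.utc).isoformat(),
                    ))
    return findings


if __name__ == "__main__":
    for finding in run_audit():
        print(finding)
```

Findings from a job like this would feed the same escalation and reporting channels as any other compliance incident, which is where ownership becomes critical.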
Governance frameworks must also define ownership. Clear accountability for model selection, deployment approval, and ongoing oversight reduces the risk of gaps between teams. Legal and compliance functions need a formal role in evaluating AI behaviour, rather than relying solely on vendor assurances. Incident response processes should explicitly include AI-related failures, ensuring teams escalate and address unexpected outputs quickly.
Importantly, companies should embed governance into existing risk management structures rather than treat it as a standalone initiative. Reporting on AI behaviour, audit findings, and compliance risks should sit alongside other enterprise risk indicators, drawing on evidence from ongoing research into how unmanaged AI adoption puts enterprises at risk across borders.