For a few years, “AI race” headlines have treated artificial intelligence like a zero‑sum sport: whoever ships the biggest model first wins. More recently, the language has shifted. Governments, regulators, and even cloud providers are talking less about raw speed and more about “AI sovereignty” and “strategic autonomy”.
The instinct is understandable. When your economy, education system, and critical infrastructure begin to depend on AI, the idea that those systems are built entirely on foreign chips, foreign clouds, and foreign models starts to feel uncomfortable. But the latest Stanford AI Index suggests that most countries—and most enterprises—are never going to own the full AI stack end‑to‑end. Instead, sovereignty in the AI era looks a lot more like managing dependencies across chips, compute, data, and models, rather than trying to walk away from them.
A New Organising Principle for AI Policy
Step back from the benchmarks and you see the macro pattern the AI Index editors are worried about: AI is scaling faster than the systems around it can adapt. In their opening note, the report’s co‑chairs, Yolanda Gil and Raymond Perrault, capture the core tension.
“As AI continues to advance rapidly, the question becomes whether the systems built around it can keep up,” they write. “That gap—between what AI can do and how prepared we are to manage it—runs through every chapter of this year’s report.”
That gap is not just technical. It is institutional and geopolitical. Gil and Perrault point out that AI is now embedded “deeper into classrooms, clinics, and legislatures—and reshapes how people work, learn, and govern” while the “cost of incomplete data continues to rise”. The AI Index, they argue, is designed to bring neutral, independent measurement to a field where most metrics are produced by companies with a commercial stake in the outcome.
Policy is where the divergence is sharpest. Governments moved on AI in 2025, but not in a co‑ordinated way. In the introduction, Gil and Perrault note:
“Governments around the world acted on AI in 2025, but not in the same direction. The EU AI Act’s first prohibitions took effect, while the United States shifted toward deregulation. Japan, South Korea, and Italy each passed national AI laws, and more than half of newly adopted national AI strategies came from developing countries entering the policy landscape for the first time. AI sovereignty emerged as a central organizing principle across all of these efforts.”
This is the context in which AI sovereignty really matters: not as a buzzword, but as the frame through which both advanced and developing economies are trying to claw back some leverage over a stack they largely do not control.
AI Sovereignty: Defining Policy in an Uneven Field
The report’s Top Takeaways make explicit how central sovereignty has become. In a short but telling paragraph, the authors write:
“AI sovereignty is becoming a defining feature of national policy, but capabilities remain uneven, even as open-source development helps to redistribute who participates. National AI strategies are expanding, particularly among developing economies, and state-backed investments in AI supercomputing are rising in parallel—a sign of growing ambitions for domestic control over AI ecosystems. Yet model production remains concentrated in the U.S. and China. Open-source development is starting to redistribute participation, with contributions from the rest of the world now outpacing Europe and approaching the United States on GitHub, fueling more linguistically diverse models and benchmarks.”
There are several threads here for policymakers and enterprise leaders:
- Sovereignty has shifted from a niche talking point to “a defining feature of national policy”.
- Ambitions are rising fastest in developing economies, which are adopting national AI strategies and building state‑backed compute, often from a position of deep dependence.
- Despite the rhetoric, actual model production and frontier capability remain concentrated in two countries.
In that light, “going it alone” is a fantasy for almost everyone. The practical question is: which dependencies are you comfortable with, and where do you need to reduce or diversify them?
The Uncomfortable Concentration of Chips, Compute, and Models
Once you follow the money and the metal, the sovereignty challenge looks less like a values debate and more like a set of very concrete dependencies.
On compute and chips, the report highlights a stark reality:
“The resources powering AI development continued to grow in 2025, but fewer notable models were released than the year before, and the systems at the frontier are increasingly concentrated among a small number of organizations,” the authors write in the Research and Development overview.
“The computing power behind these models has grown roughly 3.3 times per year since 2022, yet almost all of it flows through a single chip foundry in Taiwan, making the global hardware supply chain fragile.”
That fragility is mirrored in data‑centre geography. In a chapter highlight, the authors observe:
“The United States leads in AI data centers, and one Taiwanese foundry fabricates the majority of chips inside them. The United States hosts 5,427 data centers, more than ten times any other country, consuming more energy than any other region. A single company, TSMC, fabricates almost every leading AI chip and makes the global AI hardware supply chain dependent on one foundry in Taiwan, though a TSMC-U.S. expansion began to operate in 2025.”
A later section on infrastructure beyond GPUs is even more explicit about that dependence:
“TSMC is a single point of dependency in the global AI supply chain, as it fabricates virtually every leading AI chip, including Nvidia’s Blackwell GPUs and AMD’s MI300X.”
On the model side, the pattern rhymes. Over 90% of notable frontier models now come from industry, and “the systems at the frontier are increasingly concentrated among a small number of organizations.” The US and China trade the top spot on leaderboards, even as a few other countries—South Korea, the UAE, Switzerland, Singapore—begin to punch above their weight in patents or talent density.
For any government invoking AI sovereignty, these are the baseline facts. You can fund national AI supercomputers, pass AI acts, and launch “sovereign” models. But if almost every high‑end chip in your racks still comes from one Taiwanese foundry, most of your large‑scale training runs rely on US clouds, and the most capable closed models are controlled by a handful of foreign firms, you are not autonomous in any meaningful sense.
Sovereignty as Strategic Interdependence
That is why an emerging body of policy work—much of it also coming out of Stanford, the OECD, and European institutions—frames sovereignty less as isolation and more as strategic interdependence.
In practical terms, that means thinking in layers:
- Hardware Layer: You may never have your own TSMC, but you can diversify chip suppliers, co‑invest in regional fabrication capacity, and ensure a share of production is co‑located in your jurisdiction (as TSMC’s US expansion begins to do).
- Compute Layer: You can reduce one‑cloud dependence by running critical workloads on regional sovereign clouds or national AI factories, while still tapping hyperscalers for burst capacity and cutting‑edge tooling.
- Model Layer: You can sponsor local models in key languages and sectors, negotiate more balanced access deals with global providers, and ensure your regulators can evaluate and stress‑test the systems you deploy.
- Data And Governance Layer: You can define where data is stored, which laws apply to training and inference, and how cross‑border transfers and access requests are handled.
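The layered view above can be made operational. A minimal sketch, in Python, of what a layer‑by‑layer dependency inventory might look like for an enterprise or ministry: the class names, suppliers, shares, and the 50% flagging threshold are all illustrative assumptions, not anything prescribed by the AI Index.

```python
from dataclasses import dataclass, field

@dataclass
class Dependency:
    """One external dependency at a given layer of the AI stack (illustrative)."""
    supplier: str          # e.g. a foundry, a cloud, a model provider
    jurisdiction: str      # where the supplier is legally based
    share: float           # fraction of this layer the supplier covers (0-1)
    substitutable: bool    # is a credible alternative available today?

@dataclass
class AIStackInventory:
    """Layer-by-layer record of who you depend on, and how much."""
    layers: dict[str, list[Dependency]] = field(default_factory=dict)

    def single_points_of_failure(self) -> dict[str, list[str]]:
        # Flag any supplier covering more than half of a layer with
        # no substitute available (an arbitrary illustrative threshold).
        spof: dict[str, list[str]] = {}
        for layer, deps in self.layers.items():
            flagged = [d.supplier for d in deps
                       if d.share > 0.5 and not d.substitutable]
            if flagged:
                spof[layer] = flagged
        return spof

# Hypothetical example, loosely echoing the report's findings.
inventory = AIStackInventory(layers={
    "hardware": [Dependency("TSMC", "Taiwan", 0.9, False)],
    "compute":  [Dependency("US hyperscaler", "USA", 0.6, True),
                 Dependency("regional sovereign cloud", "EU", 0.4, True)],
})
print(inventory.single_points_of_failure())  # {'hardware': ['TSMC']}
```

The point of a structure like this is not the code itself but the discipline: dependencies only become manageable once they are enumerated per layer, with a stated view on substitutability.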
Sovereignty in this sense is less about “doing it alone” and more about being able to choose and reconfigure dependencies when conditions change—whether those conditions are geopolitical shocks, supply‑chain disruptions, or new regulation.
For enterprises, this layered view of sovereignty maps directly onto architecture decisions: which models to standardise on, where to host them, which clouds and chips to rely on, and how to make sure legal and regulatory exposure is understood at each layer.
Open Source as a Sovereignty Tool, Not a Side Story
One of the more optimistic notes in the AI Index is the way open‑source ecosystems are beginning to rebalance participation.
“Open-source AI development continues to scale, with 5.6 million projects on GitHub and Hugging Face uploads tripling since 2023. U.S.-based projects still attract the most engagement, with 30 million cumulative GitHub stars across projects that have crossed the 10-star threshold,” the authors write in the open‑source section.
The US still hosts the largest share of high‑engagement repos, but that dominance is steadily eroding:
“Among projects with at least 10 stars, the United States accounted for the largest share in 2025 (31.7%), though that has declined steadily from nearly 80% in 2011 as developers in other regions have increased their presence on the platform.”
This quantitative shift sits directly underneath that sovereignty takeaway: “Open-source development is starting to redistribute participation, with contributions from the rest of the world now outpacing Europe and approaching the United States on GitHub, fueling more linguistically diverse models and benchmarks.”
For countries that lack the capital base to build hyperscalers or the industrial policy machinery to attract a TSMC fab, open‑source AI is becoming a core sovereignty lever:
- It allows local researchers and start‑ups to tune and deploy models that reflect their own languages, regulatory norms, and risk appetites.
- It reduces lock‑in to any single foreign provider by building domestic capability to run and adapt open‑weight systems.
- It supports regional coalitions—across the Arab world, Latin America, Africa, South‑East Asia—around shared language resources and safety standards.
Again, this is not sovereignty as independence from the global system. It is sovereignty as more equal participation within it.
From Sovereignty Slogans to Sovereignty Dashboards
The AI Index is not a policy manifesto, but it does have an implicit message for both governments and enterprises: stop treating sovereignty as a one‑off statement of intent and start treating it as a measurable property of your AI stack.
Gil and Perrault emphasise that “the report equips policymakers, researchers, executives, journalists, and the public with the necessary evidence to make informed decisions about AI”. They warn that “the cost of incomplete data continues to rise” as AI moves deeper into core institutions, and they underline that “what we cannot yet measure matters just as much as what we can”.
“The data does not point in a single direction. It reveals a field that is scaling faster than the systems around it can adapt,” they conclude.
For our readers, the next frontier is to turn those measurements into something like sovereignty dashboards: live, layer‑by‑layer views of chip dependence, cloud exposure, model diversity, open‑source participation, and regulatory alignment.
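One concrete metric such a dashboard could track is concentration per layer. A minimal sketch, assuming the Herfindahl–Hirschman Index (HHI)—a standard concentration measure from competition economics—as the scoring function; the supplier shares below are purely illustrative, not real market data.

```python
def hhi(shares: list[float]) -> float:
    """HHI on a 0-10,000 scale; above 2,500 is conventionally
    treated as 'highly concentrated'."""
    return sum((100 * s) ** 2 for s in shares)

# Hypothetical supplier market shares per layer (each layer sums to 1.0).
stack = {
    "chips":  [0.90, 0.10],                    # one dominant foundry
    "cloud":  [0.30, 0.25, 0.20, 0.15, 0.10],  # a more diversified layer
    "models": [0.50, 0.30, 0.20],
}

for layer, shares in stack.items():
    score = hhi(shares)
    status = "highly concentrated" if score > 2500 else "moderate"
    print(f"{layer:>6}: HHI = {score:5.0f}  ({status})")
```

Tracked over time, a score like this turns “we are too dependent on one foundry” from a slogan into a number a board or cabinet can watch move.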
Because in a world where almost everyone still depends on the same handful of fabs, clouds, and model labs, AI sovereignty will not be about going it alone. It will be about knowing exactly where you are entangled—and having real options when you decide it is time to change those dependencies.