Amazon has invested a fresh $5 billion in Anthropic, with up to $20 billion more to follow, in a deal that deepens ties between two of the most powerful players in the artificial intelligence ecosystem.
The announcement is part of a sweeping strategic partnership under which Anthropic has committed to spending more than $100 billion over the next ten years on Amazon Web Services (AWS) technologies and chips.
Amazon has confirmed that it remains a minority investor and holds no seat on Anthropic’s board or its trust. The scale of future investments will be tied to what Amazon described as “certain commercial milestones.”
Andy Jassy, Chief Executive of Amazon, highlighted the commercial momentum behind Amazon’s custom chip offerings.
“Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand. Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI,” he said in a statement.
Dario Amodei, CEO and Co-founder of Anthropic, underscored the growing demand for Claude and the necessity of the infrastructure expansion: “Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand. Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS.”
Amazon shares rose approximately 3% in after-hours trading following the announcement.
Claude Platform Comes to AWS
The deal introduces a significant new product integration. AWS customers will be able to access the full Anthropic-native Claude console from within AWS, allowing them to use the same AWS access controls and monitoring already in place, with no additional credentials, contracts, or billing relationships to manage.
Claude remains the only frontier AI model available to customers on all three of the world’s largest cloud platforms: AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).
Partnership Built on Scale
At the heart of the deal is a vast infrastructure commitment. Anthropic has secured up to 5 gigawatts of capacity for training and deploying its Claude AI models, including new Trainium2 capacity coming online in the first half of this year and nearly 1 gigawatt of Trainium2 and Trainium3 capacity expected by the end of 2026.
The agreement also extends well into the future of Amazon’s custom silicon roadmap. The commitment spans Trainium2, Trainium3, Trainium4, and the ability to purchase future generations of Trainium as they become available, as well as tens of millions of cores of Graviton, Amazon’s widely adopted custom CPU.
Anthropic continues to choose AWS as its primary training and cloud provider for mission-critical workloads.
Since 2023, the two companies have worked together to accelerate generative AI adoption across industries, making it easier for customers to build, deploy, and scale AI applications. Over 100,000 customers now run Anthropic’s Claude models on AWS, making Claude one of the most popular model families on Amazon Bedrock.
Together, they collaborated on Project Rainier, one of the largest AI compute clusters in the world, and Anthropic is now actively using it to train and deploy Claude models for customers worldwide. Anthropic is currently using over one million Trainium2 chips to train and serve Claude.
Record Revenue and Surging Demand
The deal comes at a moment of extraordinary growth for Anthropic. Enterprise and developer demand for Claude has accelerated sharply in 2026, alongside rapid growth in consumer usage across the company’s free, Pro, and Max tiers. Anthropic’s run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025.
Growth at this pace has placed inevitable strain on infrastructure; unprecedented consumer growth has affected reliability and performance for free, Pro, Max, and Team users, particularly during peak hours. The new compute capacity is expected to alleviate those pressures.