Amazon is in talks to invest $10 billion in OpenAI, according to a report from Reuters.
The report, which cited sources familiar with the matter, revealed that the negotiations between the two companies are “very fluid” at the moment.
The deal could value OpenAI at more than $500 billion. A separate Bloomberg report stated that OpenAI could adopt Amazon Web Services’ (AWS) Trainium AI chips as part of the deal.
The Trainium chips are custom AI accelerators built to reduce the cost and energy of training and deploying large models.
At the re:Invent 2025 event, the company announced the general availability of EC2 Trn3 UltraServers, powered by the 3nm Trainium3 chip. Each UltraServer can scale to 144 chips, providing up to 362 FP8 petaflops of compute.
AWS says Trainium3 delivers up to 4.4 times more compute, four times better energy efficiency, and nearly four times higher memory bandwidth than Trainium2.
Trainium3 follows AWS’s deployment of roughly 500,000 Trainium2 chips in Project Rainier with Anthropic, described as the world’s largest AI compute cluster.
AWS also previewed Trainium4, expected to deliver at least six times higher FP4 performance, with further gains in FP8 performance and memory bandwidth.
In November, AWS and OpenAI announced a multi-year partnership worth $38 billion to run and scale OpenAI’s core AI workloads on AWS infrastructure.
Under that agreement, OpenAI will begin using AWS compute immediately, with all capacity targeted for deployment before the end of 2026 and additional expansion planned through 2027 and beyond.
OpenAI’s use of AWS Trainium chips could reduce its reliance on NVIDIA-based systems.
Trainium, alongside Google’s custom TPU accelerators, is positioned as one of the most credible alternatives to NVIDIA’s GPUs, which have long dominated the AI hardware ecosystem.
Google has disclosed that it trained and deployed its Gemini family of models, including Gemini 3 Pro, on TPU clusters.
Google is also supplying TPUs to Anthropic, which has announced plans to deploy up to one million TPUs to support workloads for its Claude models.
Amazon, which is also one of Anthropic’s major investors, has supported the training of several Claude models on Trainium hardware.