Amazon and OpenAI have announced a record-breaking $38 billion, seven-year partnership that will provide OpenAI with access to hundreds of thousands of Nvidia AI chips and Amazon’s vast cloud infrastructure. The collaboration will be hosted on Amazon Web Services (AWS), featuring Nvidia’s latest GB200 and GB300 accelerators. OpenAI is expected to begin using the infrastructure immediately, with full deployment slated for completion by the end of 2026. Following the announcement, Amazon’s stock rose nearly 6%, while Nvidia shares also gained.
The deal marks a significant shift in OpenAI’s cloud strategy. After years of exclusive collaboration with Microsoft, OpenAI is now diversifying its cloud partnerships to ensure operational independence and scalability. For Amazon, the agreement signals a comeback in the competitive AI infrastructure market, positioning AWS as a key player in powering next-generation AI models like ChatGPT.
Amazon’s approach is two-pronged: it will supply Nvidia GPUs to OpenAI while continuing to develop its own Trainium2 and Inferentia chips, which Anthropic already uses. This lets Amazon profit from the ongoing AI hardware boom while laying the groundwork to reduce its reliance on Nvidia. Analysts note that OpenAI’s projected $1 trillion in compute spending by 2030 could both fuel AI innovation and risk inflating an infrastructure investment bubble.
For OpenAI, the deal secures long-term access to the world’s most advanced compute resources—crucial for frontier AI research and model scaling. For Amazon, it’s a defining victory that reaffirms AWS’s dominance in cloud infrastructure. The partnership is set to accelerate enterprise AI adoption, intensify competition among hyperscalers, and reshape the economics of the global AI ecosystem.