OpenAI has reportedly turned to Google Cloud, one of its biggest rivals in the AI arms race, to support the training of its ChatGPT models, a deal struck despite the two companies' head-to-head competition between ChatGPT and Google's Gemini large language models. According to three sources, the agreement was finalized in May after months of negotiations that began earlier in the year. The move reflects a broader shift: as model scale grows rapidly, AI companies are moving from exclusive reliance on a single cloud provider toward a diversified compute strategy.
Background: the end of Azure exclusivity and the acceleration of Stargate
Since 2023, OpenAI had relied on Microsoft Azure as its primary compute provider, but that exclusive arrangement ended in January 2025; Microsoft now retains only a "right of first refusal" rather than an exclusive claim. OpenAI has since moved to expand and diversify its capacity, joining the $500 billion Stargate project with SoftBank and Oracle and signing a five-year contract with CoreWeave worth a total of $11.9 billion.
Against this backdrop, the turn to Google Cloud is OpenAI's latest and arguably most significant move. Access to Google's TPU capacity would not only ease the pressure on Azure resources but also improve OpenAI's compute redundancy and supply-chain stability.
The calculus on both sides: a new logic for cloud compute and the AI ecosystem
For OpenAI, the partnership strengthens long-term scalability and provides an underlying guarantee for the next generation of large models (such as GPT‑5). As one Reddit commenter put it: "This deal must be in preparation for GPT‑5, otherwise they simply do not have enough computing power."
From Google's perspective, the deal boosts Google Cloud revenue while putting its TPU inventory to fuller use and improving cloud utilization. Facing intense competition from Amazon AWS and Microsoft Azure, Google is also keen to cement its market positioning as the "AI-ready cloud", and landing OpenAI as a customer is a notable win for its industry standing.
It is worth noting that while the agreement monetizes Google's spare compute capacity, it may also strain the company's own resources: Google CFO Anat Ashkenazi said in April that the company was already facing a shortfall between capacity and demand.
The trend toward multi-sourced AI compute: a new era of ecosystem win-wins
This deal is not an isolated case. In recent years, AI companies have been diversifying their compute supply to avoid lock-in to any single vendor and to strengthen their bargaining position. For example:
The Stargate project, a planned $500 billion investment with SoftBank, Oracle and others, aims to build multinational AI infrastructure;
The CoreWeave agreement lets OpenAI rapidly expand its GPU capacity;
OpenAI's in-house chip development effort aims to reduce its dependence on third-party chipmakers such as Nvidia.
This trend signals a deep evolution of AI industry infrastructure: core compute capacity is becoming a hard requirement for the industry. Tech giants are no longer mere "cloud service providers"; they are competing head-to-head on compute, chips, data centers, and even national strategy.
The supply of AI compute is entering an era of "multi-supplier integration", reshaping the industry ecosystem, competitive dynamics, and capital structures. With ChatGPT's user base and model scale growing rapidly, compute has become the lifeblood of AI deployment. Going forward, whoever controls the underlying infrastructure may well command the high ground in the next wave of AI.