AMD, the US semiconductor giant, held its 2025 global AI development conference early this morning, officially unveiling the next-generation Instinct MI400 series of AI accelerator chips alongside the MI350, a performance-upgraded version of its predecessor. Notably, Sam Altman, co-founder and CEO of OpenAI, attended the conference as a special guest and joined AMD in launching the new products. AMD revealed that OpenAI provided ongoing technical feedback during the MI400's R&D phase, helping optimize the GPU architecture to better meet the computing demands of large-model training and inference.
According to the launch details, the Instinct MI400 is equipped with up to 432GB of HBM4 memory, a figure that visibly surprised Altman on stage and sparked heated discussion. Compared with the previous generation, the MI350 carries 288GB of HBM3E memory and delivers up to 8TB/s of memory bandwidth, achieving up to a 4x improvement in AI training performance and a 35x improvement in inference capability.
On cost-effectiveness, AMD pointed out that the MI355X chip holds an advantage in power efficiency, processing roughly 40% more tokens per dollar than competing products. This is seen as a key competitive factor amid rising AI inference costs.
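To make the "tokens per dollar" metric concrete, here is a minimal sketch of how such a comparison is computed. All throughput and cost figures below are made-up placeholders for illustration, not published benchmark numbers for the MI355X or any competing chip.

```python
# Hypothetical illustration of a "tokens per dollar" comparison.
# The throughput and hourly-cost figures are invented placeholders.

def tokens_per_dollar(tokens_per_second: float, cost_per_hour: float) -> float:
    """Tokens generated per dollar of compute spend."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / cost_per_hour

# Placeholder numbers chosen so the gap works out to ~40%.
competitor = tokens_per_dollar(tokens_per_second=10_000, cost_per_hour=10.0)
mi355x = tokens_per_dollar(tokens_per_second=14_000, cost_per_hour=10.0)

advantage = (mi355x / competitor - 1) * 100
print(f"{advantage:.0f}% more tokens per dollar")  # → 40% more tokens per dollar
```

The metric folds both raw inference throughput and operating cost into a single number, which is why it is used as a cost-efficiency yardstick as inference bills grow.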
The launch of the MI400 series, coupled with the deep collaboration with OpenAI, reflects AMD's push for a differentiated breakthrough in an AI hardware market fiercely contested with Nvidia. The industry broadly expects this chip series to serve hyperscale cloud platforms and model-training centers, and it may eventually be deployed in the data-center infrastructure of major AI developers such as OpenAI, Microsoft, and Meta.