- AMD's new Instinct MI350 accelerators deliver up to 35x faster inference performance and significantly improve power efficiency.
- Rack-scale AI infrastructure with MI350 and EPYC processors is already being deployed on hyperscale clouds like Oracle Cloud Infrastructure.
- Software breakthrough: ROCm 7 optimizes AI development and is now available alongside the global AMD Developer Cloud platform.
- Collaborations with Meta, OpenAI, Microsoft, and other leading companies strengthen AMD's leadership in the open AI ecosystem.
AMD has introduced its new Instinct MI350 accelerators, aiming to mark a turning point in generative artificial intelligence and advanced computing. During its Advancing AI 2025 event, the company made clear its goal of establishing itself as a benchmark in performance, efficiency, and scalability for the most demanding AI applications. Its strategy, built on open technologies and standards, also seeks to ease the integration of hardware and software through collaboration with various industry leaders.
With these releases, AMD aims to be a key player in creating open and robust AI ecosystems, capable of responding to the exponential growth of next-generation language models and algorithms. The challenge is to combine high-level accelerators, powerful processors, and an optimized software stack, promoting the democratization of artificial intelligence solutions both for large companies and independent developers.
The Instinct MI350 arrives: a leap in performance and efficiency

The new Instinct MI350 series, consisting of the MI350X and MI355X GPUs, promises to quadruple computing power in artificial intelligence tasks compared to the previous generation. In AI inference, the leap is even more significant, reaching up to 35 times the previous performance. The MI355X also stands out on price-performance, delivering up to 40% more tokens per dollar than competing hardware.
To meet the needs of the most complex workloads, the Instinct MI350 integrates 288 GB of HBM3E memory (supplied by Micron and Samsung) and offers a bandwidth of up to 8 TB/s. Both air and liquid cooling options are available, allowing up to 64 GPUs in a traditional rack, or double that in direct liquid cooling configurations. Performance figures reach up to 2.6 exaFLOPS in FP4/FP6 operations.
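To put those memory figures in perspective, a quick back-of-the-envelope calculation shows how long a single pass over the full 288 GB of HBM3E would take at the quoted 8 TB/s peak bandwidth. This is illustrative arithmetic based only on the numbers above; the function name is hypothetical, not an AMD API.

```python
HBM_CAPACITY_GB = 288   # per-GPU HBM3E capacity quoted for the MI350 series
BANDWIDTH_TB_S = 8      # quoted peak memory bandwidth

def full_memory_sweep_time_s(capacity_gb: float, bandwidth_tb_s: float) -> float:
    """Time to read the entire HBM once at peak bandwidth (1 TB = 1000 GB)."""
    return capacity_gb / (bandwidth_tb_s * 1000)

print(full_memory_sweep_time_s(HBM_CAPACITY_GB, BANDWIDTH_TB_S))  # 0.036
```

In other words, the GPU can in principle stream its entire memory in roughly 36 milliseconds, which is what makes serving large language models (whose weights must be re-read for every generated token) bandwidth-bound rather than compute-bound.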
Comprehensive infrastructure and scalability: the "Helios" proposal

One of the main focuses is the open rack-scale infrastructure, already running on large clouds like Oracle Cloud Infrastructure. This solution, which will be broadly available in the second half of 2025, combines Instinct MI350 accelerators with fifth-generation AMD EPYC processors and Pensando Pollara network cards.
Looking ahead, AMD previewed "Helios," its next-generation AI racks, which will integrate Instinct MI400 GPUs, EPYC "Venice" processors with Zen 6 architecture, and Pensando "Vulcan" network cards. When running Mixture of Experts AI models, the performance jump could be up to 10 times over the current generation.
On the software side, AMD launches ROCm 7, a revamped version designed to address the challenges of generative AI and high-performance computing. This update includes improved support for standard frameworks, plus new APIs, drivers, and tools, expanding options for developers.
Furthermore, the AMD Developer Cloud platform is now available globally, offering a managed environment for agile AI project development and access to advanced resources.
Boosting energy efficiency and sustainability
One aspect AMD has highlighted is energy optimization. The MI350 accelerators have far exceeded internal goals, achieving energy efficiency improvements of up to 38 times over a five-year period. The company also aims to increase rack-scale energy efficiency twentyfold by 2030 compared to 2024, which would allow AI models that currently require hundreds of racks to be trained on a single rack, reducing electricity consumption by up to 95%.
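The 20x efficiency target and the 95% figure are two views of the same arithmetic: if the same work is done at N times the energy efficiency, the energy saved is 1 − 1/N of the original. A minimal sketch of that relationship (hypothetical helper name, illustrative only):

```python
def energy_savings_fraction(efficiency_gain: float) -> float:
    """Fraction of energy saved for equal work at `efficiency_gain`x efficiency."""
    return 1 - 1 / efficiency_gain

# A 20x rack-scale efficiency gain corresponds to ~95% less electricity.
print(f"{energy_savings_fraction(20):.0%}")  # 95%
```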
Strategic alliances are a pillar for AMD, with companies such as Meta, OpenAI, Microsoft, Oracle, Cohere, Red Hat, HUMAIN, Astera Labs, Marvell, and xAI showing great confidence in its technology. Meta already uses the MI300X series for inference on models like Llama 3 and 4; OpenAI is working closely with AMD to integrate hardware and software into its AI infrastructure; and Microsoft is already running production models on Azure with the Instinct platform.
Oracle, for its part, plans to deploy up to 131,072 MI355X GPUs to scale its zettascale clusters, strengthening the partner ecosystem that drives the adoption and development of AI solutions.
AMD's vision is not only focused on speed and power, but also on sustainability, technological openness, and building strong partnerships to accelerate the advancement of artificial intelligence globally.
