AMD unveils Instinct MI350 accelerators and its high-performance AI roadmap

Last update: 13/06/2025

  • AMD's new Instinct MI350 accelerators deliver up to 35x faster inference performance and significantly improve power efficiency.
  • Rack-scale AI infrastructure with MI350 and EPYC processors is already being deployed on hyperscale clouds like Oracle Cloud Infrastructure.
  • Software breakthrough: ROCm 7 optimizes AI development and is now available alongside the global AMD Developer Cloud platform.
  • Collaborations with Meta, OpenAI, Microsoft, and other leading companies strengthen AMD's leadership in the open AI ecosystem.

AMD has introduced its new Instinct MI350 accelerators, aiming to mark a turning point in generative artificial intelligence and advanced computing. During the Advancing AI 2025 event, the company made clear its goal of establishing itself as a benchmark in performance, efficiency, and scalability for the most demanding AI applications. The strategy, based on open technologies and standards, also seeks to facilitate the integration of hardware and software through collaboration with various industry leaders.

With these releases, AMD aims to be a key player in creating open and robust AI ecosystems, capable of responding to the exponential growth of next-generation language models and algorithms. The challenge is to combine high-level accelerators, powerful processors, and an optimized software stack, promoting the democratization of artificial intelligence solutions both for large companies and independent developers.


The Instinct MI350 arrives: a leap in performance and efficiency


The new Instinct MI350 series, consisting of the MI350X and MI355X GPUs, promises to quadruple computing power in artificial intelligence tasks compared to the previous generation. In AI inference, the leap is even more significant, reaching up to 35 times the previous performance. The MI355X model also stands out in price-performance terms, delivering up to 40% more tokens per dollar compared to competitors.

To meet the needs of the most complex workloads, the Instinct MI350 integrates 288 GB of HBM3E memory (supplied by Micron and Samsung) and offers a bandwidth of up to 8 TB/s. Both air and liquid cooling options are available, allowing up to 64 GPUs in a traditional rack, or double that in direct liquid cooling configurations. Performance figures reach up to 2.6 exaFLOPS in FP4/FP6 operations.

Comprehensive infrastructure and scalability: the "Helios" proposal


One of the main focuses is the open rack-scale infrastructure, already running on large clouds like Oracle Cloud Infrastructure. This solution, which will be available in the second half of 2025, combines Instinct MI350 accelerators with fifth-generation AMD EPYC processors and Pensando Pollara network cards.


Looking ahead, AMD previewed "Helios," its next-generation AI racks, which will integrate Instinct MI400 GPUs, EPYC "Venice" processors with Zen 6 architecture, and Pensando "Vulcan" network cards. The performance jump when running Mixture of Experts AI models could be up to 10 times over the current generation.

On the software side, AMD launches ROCm 7, a revamped version designed to address the challenges of generative AI and high-performance computing. This update includes improved support for standard frameworks, along with new APIs, drivers, and tools, expanding options for developers.

Furthermore, the AMD Developer Cloud platform is now available globally, offering a managed environment for agile AI project development and access to advanced resources.

Boosting energy efficiency and sustainability


One aspect AMD has highlighted is energy optimization. The MI350 accelerators have far exceeded internal goals, achieving energy efficiency improvements of up to 38 times over a five-year period. The company also aims to increase rack-scale energy efficiency by 2030 by a factor of 20 compared to 2024, which would make it possible to train AI models that currently require hundreds of racks on just one, reducing electricity consumption by up to 95%.
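The relationship between the stated 20x efficiency target and the 95% consumption reduction is simple arithmetic; a quick back-of-the-envelope sketch (illustrative only, using just the 20x factor from AMD's claim):

```python
# For a fixed workload, a 20x gain in energy efficiency means the
# same work is done with 1/20 of the energy, i.e. a 95% reduction
# in electricity consumption.

efficiency_gain = 20  # AMD's stated 2030 rack-scale target vs. 2024
relative_consumption = 1 / efficiency_gain          # fraction of original energy
reduction_pct = (1 - relative_consumption) * 100    # percentage saved

print(f"Relative consumption: {relative_consumption:.0%}")  # 5%
print(f"Reduction: {reduction_pct:.0f}%")                   # 95%
```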


Strategic alliances are a pillar for AMD, with companies such as Meta, OpenAI, Microsoft, Oracle, Cohere, Red Hat, HUMAIN, Astera Labs, Marvell, and xAI showing great confidence in its technology. Meta already uses the MI300X series for inference in models like Llama 3 and 4; OpenAI is working closely with AMD to integrate hardware and software into its AI infrastructure; and Microsoft is already running production models on Azure with the Instinct platform.

Oracle, for its part, plans to deploy up to 131,072 MI355X GPUs to scale its zettascale clusters, strengthening the partner ecosystem that drives the adoption and development of AI solutions.

AMD's vision is not only focused on speed and power, but also on sustainability, technological openness, and building strong partnerships to accelerate the advancement of artificial intelligence globally.