Samsung Lands HBM4 Deal for AMD AI GPUs

Samsung said its HBM4 delivers speeds of up to 13 Gbps per pin and bandwidth reaching 3.3 TB/s.


Samsung Electronics has signed a new memorandum of understanding with AMD (Advanced Micro Devices) to deepen their partnership on next-generation AI memory and computing, with a focus on supplying cutting-edge HBM4 memory for upcoming AMD AI accelerators.

The agreement, announced on March 18, was formalised at Samsung’s Pyeongtaek semiconductor complex in South Korea, where AMD CEO Lisa Su met Samsung leadership, marking a high-level push to align both companies’ AI infrastructure roadmaps.

Under the deal, Samsung Electronics will serve as a primary supplier of HBM4 memory for AMD’s next-generation AMD Instinct MI455X GPUs, alongside advanced DDR5 memory for 6th Gen AMD EPYC processors (codenamed Venice). These components will power AMD’s broader rack-scale AI system, the AMD Helios platform.

Samsung said its HBM4—built on a 6th-generation 10nm-class DRAM process with a 4nm logic base die—delivers speeds of up to 13 Gbps per pin and bandwidth reaching 3.3 TB/s, positioning it as a key enabler for large-scale AI model training and inference.
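Those two figures are consistent with each other: assuming the standard 2048-bit-per-stack HBM4 interface defined by JEDEC (a width not stated in Samsung's announcement), a 13 Gbps per-pin rate works out to roughly 3.3 TB/s per stack, as this back-of-the-envelope sketch shows:

```python
# Rough check of Samsung's quoted HBM4 figures.
# Assumption: the JEDEC HBM4 2048-bit-per-stack interface width;
# the 13 Gbps per-pin rate is the figure quoted in the article.

PIN_RATE_GBPS = 13            # per-pin data rate, in Gbit/s
INTERFACE_WIDTH_BITS = 2048   # HBM4 interface width per stack (JEDEC)

# bits/s across the interface -> bytes/s -> terabytes/s
bandwidth_tb_per_s = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8 / 1000
print(f"Per-stack bandwidth: {bandwidth_tb_per_s:.2f} TB/s")  # ~3.33 TB/s
```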

The collaboration reflects growing industry pressure to optimise memory bandwidth and energy efficiency as AI workloads scale. AMD’s MI455X GPU is expected to rely heavily on these improvements to compete in the high-performance AI infrastructure market.

“Integration across the full computing stack—from silicon to system to rack—is essential,” Su said, underscoring the need for tighter ecosystem partnerships. 

Her visit to Samsung’s advanced fab signals AMD’s intent to secure long-term supply and co-development advantages amid intensifying AI hardware competition.

The partnership also includes potential foundry collaboration, with Samsung in discussions to manufacture future AMD chips—an area where it is seeking to expand its position against rivals.

Samsung and AMD’s relationship spans nearly two decades, including prior work on HBM3E memory used in AMD’s MI350X and MI355X accelerators.

The partnership comes as Samsung Electronics steps up its push across the AI chip ecosystem. The company is working with NVIDIA on advanced memory and is also reported to be supplying components to AI hardware players such as Groq.

At NVIDIA’s GTC 2026 conference, CEO Jensen Huang introduced a new AI inference processor based on Groq’s technology.

“I want to thank Samsung who manufactures the Groq LP30 chip for us, and they’re cranking as hard as they can,” he said, adding the chips were in production and would be shipped in the second half of this year.

Staff Writer
The AI & Data Insider team works with a staff of in-house writers and industry experts.