UAE Brief
Technology

Samsung targets first quarter HBM4 deliveries for AI boom

Last updated: February 10, 2026 7:34 am
Published: February 9, 2026

SEOUL: Samsung Electronics said it is on track to begin delivering its next-generation HBM4 high-bandwidth memory products in the first quarter of 2026. The company said the HBM4 lineup will include products with 11.7 gigabits-per-second performance, as it expands sales of memory used in artificial intelligence servers and accelerators. Samsung did not name customers for the initial HBM4 deliveries, and it did not disclose shipment volumes in its earnings materials.

Samsung says HBM4 deliveries start in Q1 2026 as 11.7 Gbps memory targets AI servers worldwide.

In its fourth-quarter and full-year 2025 results, Samsung said its memory business posted record highs in quarterly revenue and operating profit, supported by higher sales of high-value products including HBM, server DDR5 and enterprise solid-state drives. The company said limited supply availability remained a factor even as demand for AI computing continued to lift consumption of advanced memory and storage used in data centers.

High-bandwidth memory is a vertically stacked DRAM technology designed to increase data throughput compared with conventional DRAM, and HBM4 is the newest generation following HBM3E. In an investor presentation accompanying the earnings release, Samsung said it plans to start delivering HBM4 “mass products,” including an 11.7 Gbps version, and cited “timely shipment” of HBM4 as part of its near-term outlook for AI-related products.
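The per-pin speed Samsung cites translates into per-stack bandwidth once the interface width is taken into account. A rough sketch of that arithmetic, assuming the 2048-bit-per-stack interface JEDEC specifies for HBM4 (double the 1024 bits of HBM3E) and, for comparison, a 9.8 Gbps HBM3E pin speed — both assumed figures not stated in the article:

```python
# Rough per-stack bandwidth estimate for stacked HBM.
# Assumptions (not from the article): HBM4 uses a 2048-bit interface
# per stack, HBM3E a 1024-bit interface, and 9.8 Gbps is taken as a
# representative HBM3E per-pin speed for comparison.

def stack_bandwidth_gb_s(pin_speed_gbps: float, interface_bits: int) -> float:
    """Peak per-stack bandwidth in GB/s: per-pin speed x pins / 8 bits per byte."""
    return pin_speed_gbps * interface_bits / 8

hbm4 = stack_bandwidth_gb_s(11.7, 2048)   # Samsung's cited 11.7 Gbps HBM4 part
hbm3e = stack_bandwidth_gb_s(9.8, 1024)   # assumed HBM3E comparison point

print(f"HBM4  @ 11.7 Gbps, 2048-bit: {hbm4:.0f} GB/s per stack")
print(f"HBM3E @  9.8 Gbps, 1024-bit: {hbm3e:.0f} GB/s per stack")
```

Under these assumptions an 11.7 Gbps HBM4 stack would deliver roughly 3 TB/s, more than double a 1024-bit HBM3E stack at a similar pin speed — which is why the generational step matters for AI accelerators.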

Nvidia, the largest supplier of AI data center accelerators, has introduced its Rubin platform, which it says uses HBM4 across multiple system configurations. On Nvidia’s product specifications page for the Vera Rubin NVL72 rack-scale system, the company lists 20.7 terabytes of HBM4 with 1,580 terabytes per second of bandwidth, and 288 gigabytes of HBM4 with 22 terabytes per second of bandwidth for a single Rubin GPU, noting the figures are preliminary and subject to change.
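The rack-level figures Nvidia publishes follow directly from the per-GPU figures: an NVL72 rack carries 72 Rubin GPUs, so multiplying the single-GPU capacity and bandwidth by 72 reproduces the listed totals (with rounding). A quick cross-check:

```python
# Cross-check of Nvidia's published (preliminary) Vera Rubin NVL72 figures:
# rack-level HBM4 totals are the per-GPU numbers scaled by 72 GPUs per rack.

GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 288        # per-GPU HBM4 capacity from Nvidia's spec page
HBM4_BW_PER_GPU_TB_S = 22    # per-GPU HBM4 bandwidth from the same page

rack_capacity_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000   # 20.736 -> "20.7 TB"
rack_bandwidth_tb_s = GPUS_PER_RACK * HBM4_BW_PER_GPU_TB_S  # 1584 -> "~1,580 TB/s"

print(f"Rack HBM4 capacity:  {rack_capacity_tb:.1f} TB")
print(f"Rack HBM4 bandwidth: {rack_bandwidth_tb_s} TB/s")
```

The small gaps against the listed 20.7 TB and 1,580 TB/s are rounding in Nvidia's published totals, which the company notes are preliminary and subject to change.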

Rubin platform memory requirements

Samsung’s results statement also pointed to broader work across advanced semiconductor manufacturing and packaging linked to AI computing. It said its foundry business commenced mass production of first-generation 2-nanometer products and began shipments of 4-nanometer HBM base-die products, components used in the logic layer of high-bandwidth memory stacks. Samsung said it plans to provide optimized solutions through integration of logic, memory and advanced packaging technologies.

Other major memory makers have also published timelines for their HBM4 readiness. SK hynix said in September 2025 that it had completed HBM4 development and readied a mass-production system. Micron said in a December 2025 investor presentation that its HBM4, with speeds over 11 Gbps, is on track to ramp with high yields in the second calendar quarter of 2026, consistent with customers' platform ramp plans.

Competing HBM4 road maps

In describing its own HBM4 program, Micron said its HBM4 uses advanced CMOS and metallization technologies on the base logic die and DRAM dies, designed and manufactured in-house, and pointed to packaging and test capability as critical to performance and power targets. SK hynix has described HBM4 as part of a generational progression in stacked memory built for ultra-high performance AI, where bandwidth and power efficiency are central requirements for data center operation.

Samsung’s earnings materials did not link its HBM4 delivery schedule to any specific AI processor program or customer deployment. Nvidia’s Rubin announcements and published specifications do not identify HBM4 suppliers, and Nvidia has not disclosed vendor allocations for the HBM4 used in Rubin systems. Samsung’s confirmed timeline, as stated in its results release, is that HBM4 deliveries are expected to begin within the first quarter of 2026. – By Content Syndication Services.

© 2026 UAE Brief | All Rights Reserved