Nvidia Unveils 'Rubin', Its Next-Gen AI Chip Platform Launching in 2026

Nvidia’s Rubin AI chip platform, launching in 2026, integrates advanced GPUs, the Vera CPU, and HBM4 memory for superior AI performance and energy efficiency.

Punam Singh

Nvidia has unveiled its next-generation AI chip platform, called “Rubin”, which is expected to roll out in 2026. The platform was announced by Nvidia CEO Jensen Huang at Computex 2024. It will include new graphics processing units (GPUs) and a new central processing unit (CPU), named “Vera”.


The Vera CPU in Nvidia’s Rubin platform is designed to handle complex AI computations, making it well suited to applications ranging from data centers to edge computing. It is a key component of the Rubin platform, which integrates advanced GPUs, CPUs, and networking chips to deliver superior performance and efficiency for AI applications.

The Rubin family of chips will also incorporate networking chips and next-generation high-bandwidth memory from suppliers such as SK Hynix, Micron, and Samsung.

The move comes as Nvidia continues to hold roughly 80% of the AI chip market, cementing its position as the dominant player in the rapidly evolving field of AI technology.


Key Features of the Rubin AI Chip Platform

  • Integration of new graphics processing units (GPUs) and central processing units (CPUs), along with networking chips.
  • Incorporation of next-generation high-bandwidth memory supplied by SK Hynix, Micron, and Samsung to boost the performance of AI tasks.
  • Use of HBM4 (high-bandwidth memory), the latest iteration of this essential memory technology.
  • A focus on energy efficiency, delivering high performance while minimizing power consumption for both data centers and edge devices.
  • Versatility to handle diverse AI workloads across applications such as autonomous vehicles, smart cities, and consumer electronics.


Nvidia's upcoming Rubin AI chip platform, set for release in 2026, marks a significant milestone in the evolution of AI computing. By combining advanced GPUs, CPUs, and networking chips with next-generation high-bandwidth memory (HBM4), the Rubin platform is positioned to deliver exceptional performance and efficiency for AI applications.

The Vera CPU, designed specifically for complex AI computations, and HBM4 memory, offering greater bandwidth and capacity, will enable the Rubin platform to handle the most demanding AI workloads. This cutting-edge technology will be crucial for accelerating AI training and inference tasks and for driving innovation in fields such as autonomous vehicles, smart cities, and consumer electronics.