

NVIDIA has unveiled Vera Rubin, a next-generation AI computing platform, at CES 2026. The new platform, announced during CEO Jensen Huang’s keynote, highlights the company’s direction for large-scale artificial intelligence. Vera Rubin succeeds the Blackwell generation and is designed to power the most demanding AI workloads in the coming decade.
Unlike a traditional chip launch, Vera Rubin is a rack-scale AI platform. NVIDIA says the entire rack serves as the basic unit of computation rather than an individual server or GPU.
The platform tightly integrates multiple components, including a next-generation Rubin GPU, a custom Vera CPU, high-bandwidth NVLink interconnects, networking chips, and data-processing units, all co-designed to work as a single system.
NVIDIA notes that AI workloads are evolving rapidly. Training giant models is no longer the only challenge; inference, long-context reasoning, and agentic AI now demand highly efficient communication between chips.
According to the company, Vera Rubin can dramatically reduce inference costs and the number of GPUs required for specific workloads compared to the previous generation. This makes it better suited for always-on ‘AI factories.’
The announcement reinforces NVIDIA’s hold over the AI data-centre stack, as its rivals race to catch up with custom silicon. By offering a tightly integrated platform rather than just a faster GPU, NVIDIA hopes customers will prefer end-to-end systems optimised for scalability.
NVIDIA says Vera Rubin systems are already in production and will roll out to cloud providers and enterprise partners in the second half of 2026. The CES 2026 debut underlines the company's message that AI infrastructure is now the focal point of future technological advancement.