NVIDIA is set to unveil a new AI processor featuring Groq chip technology for OpenAI at GTC 2026, signaling deeper collaboration in AI infrastructure. OpenAI is expected to accelerate model development, expand AI infrastructure, and deepen global partnerships.
According to reports, NVIDIA has been developing a new processor to speed up AI inference and will unveil it at its GTC developer conference in San Jose in March 2026.
This could strengthen NVIDIA’s position in advanced AI infrastructure, expand its custom silicon strategy beyond GPUs, and deepen ties with key AI developers, reinforcing its leadership in the rapidly evolving AI hardware ecosystem.
The new system is expected to leverage architecture from Groq, the "acqui-hire" startup whose founder joined NVIDIA last year. By moving toward Language Processing Units (LPUs), NVIDIA aims to relieve the decoding bottleneck in AI inference.
The platform will focus on inference computing and include a chip designed by Groq, intended to help OpenAI and other customers build faster, more efficient AI systems.
One source said NVIDIA struck a US$20 billion licensing deal with Groq.
NVIDIA secured a major win with OpenAI agreeing to become a lead customer for the new processor. This comes at a sensitive time, as Sam Altman’s firm had recently been "shopping around" for more efficient alternatives.
OpenAI announced a massive purchase of "dedicated inference capacity" from NVIDIA, supported by a US$30 billion investment from the chip giant. This helps cement a partnership that had recently shown signs of diversifying toward Amazon and Cerebras.
The ChatGPT maker has discussed working with startups, including Cerebras and Groq, to provide chips for faster inference. OpenAI has secured a $110 billion funding boost, with Amazon and NVIDIA as two of the key investors.
“NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing,” Altman wrote on X. “We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.”
While NVIDIA locks in OpenAI, other major players such as Anthropic continue to lean heavily on Amazon’s Trainium and Google’s TPU chips to power their models.
Although NVIDIA has long controlled over 90% of the GPU market for AI training, it now faces intense pressure: customers are demanding more efficient solutions for running (inference) models rather than just building them. A recent deal with Meta Platforms marked the first significant deployment of NVIDIA’s GPUs for ad-targeting agents, underscoring that NVIDIA is looking beyond GPUs to maintain its data center dominance.
By integrating Groq’s chip technology for OpenAI workloads, NVIDIA is signaling a broader shift toward specialized, high-efficiency processors. As AI models grow more complex, strategic collaborations like this could define the next phase of innovation.