Google has introduced two new chips in its eighth-generation Tensor Processing Unit (TPU) lineup, a move aimed at meeting rising demand for AI workloads. The two chips, TPU 8t and TPU 8i, are designed for different purposes, reflecting the growing push toward hardware specialization.
Both chips are paired with Google’s Arm-based Axion CPUs as the host and use advanced liquid cooling, a combination that improves performance while keeping energy consumption in check. The company said the new TPUs form part of its broader full-stack infrastructure, spanning networking, data centers, and energy-efficient operations.
Google describes TPU 8t as its ‘training powerhouse’: its main purpose is to speed up the training of large AI models, a process that typically demands substantial compute and time.
The firm asserts that the chip offers nearly three times the computational power of its predecessor, an increase it says can cut model training times from months to weeks.
TPU 8i is built for inference workloads and is what Google calls a ‘reasoning engine.’ It is intended to serve as a foundation for advanced AI systems in which multiple agents interact continuously.
TPU 8i also offers greater memory bandwidth, improving performance for latency-sensitive operations. Latency matters most for AI systems running at scale, where even brief delays can cause serious operational disruptions.
Although the two chips handle different tasks, Google highlighted the advantages of their specialized designs. The tech giant sees them as a breakthrough for building fast, agent-based intelligent systems for global use.