Google has unveiled TranslateGemma, a new family of open translation models designed to run locally rather than through a closed cloud service. Built on the company’s Gemma 3 architecture, it supports 55 fully tested languages and targets developers, researchers, and enterprises that want speed, flexibility, and data control.
Unlike mainstream translation tools that rely on remote servers, TranslateGemma allows users to deploy models on devices, private servers, or custom infrastructure. Google positions it as part of a broader push toward open-weight AI systems to reduce dependency on proprietary platforms.
Google positions TranslateGemma as an alternative to closed translation systems, such as the translation features built into ChatGPT. Cloud-based tools are capable and flexible, but they require sending text to external servers.
TranslateGemma flips that model: by releasing open weights, Google lets developers download, inspect, fine-tune, and run the models wherever they choose. Local deployment is a meaningful advantage for privacy and regulatory compliance, including GDPR and OECD data-protection guidelines, particularly for organizations handling sensitive data or operating in regions with unreliable connectivity.
TranslateGemma launches in three sizes: 4B, 12B, and 27B parameters.
The smaller 4B model can run on smartphones and edge devices.
Mid-sized and large variants target cloud environments and enterprise-scale deployments.
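For developers wondering what local deployment might look like in practice, the sketch below loads a small Gemma-family checkpoint with Hugging Face transformers and runs a single translation prompt. The model identifier, prompt wording, and generation settings are assumptions for illustration only; consult the official model cards for the actual names and recommended usage.

```python
# Minimal sketch: running a TranslateGemma-style model locally with
# Hugging Face transformers. The model ID below is hypothetical; check the
# official Kaggle or Hugging Face listings for the real identifiers.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/translategemma-4b"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt format is an assumption; Gemma-family models typically expect a chat template.
prompt = "Translate the following English sentence into German: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Because the weights run entirely on local hardware, no text leaves the device, which is the core of the privacy argument Google is making.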
Google says its training approach combines supervised fine-tuning with reinforcement learning, using both high-quality human translations and synthetic data. The goal is to reduce errors across widely spoken and low-resource languages without sacrificing efficiency.
At launch, TranslateGemma supports 55 fully evaluated languages. However, Google says the models are already training on nearly 500 additional languages, paving the way for future expansions. Thanks to Gemma 3’s multimodal capabilities, the models can also translate text embedded in images without additional training.
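As a rough illustration of that multimodal path, the sketch below feeds an image plus an instruction to a hypothetical TranslateGemma checkpoint through the transformers image-text-to-text pipeline. The model ID, message structure, and output handling are assumptions and may differ from what the released models expect.

```python
# Minimal sketch: translating text embedded in an image, leaning on Gemma 3's
# multimodal input support. Model ID and prompt wording are assumptions.
from transformers import pipeline

translator = pipeline(
    "image-text-to-text",               # multimodal pipeline in recent transformers releases
    model="google/translategemma-12b",  # hypothetical identifier
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/street-sign.jpg"},
            {"type": "text", "text": "Translate the text in this image into English."},
        ],
    }
]

result = translator(text=messages, max_new_tokens=128)
# Inspect the returned structure; the exact format varies by transformers version.
print(result[0]["generated_text"])
```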
The models are available through Kaggle, Hugging Face, Google Colab, and Vertex AI, alongside a detailed technical report covering benchmarks and methodology.
With TranslateGemma, Google makes a clear bet on open, locally deployable translation AI, challenging closed systems and giving developers more control over performance, privacy, and infrastructure.