

A quiet change in how Google’s cloud services interact has opened an unexpected security gap, putting thousands of organisations at risk of data exposure and mounting AI bills. Security researchers have found that publicly visible Google API keys, once considered low risk, can now be used to access Gemini AI if the generative AI service is enabled in the same cloud project.
Nearly 3,000 such keys are estimated to be active across websites and public code repositories, including those linked to financial institutions, technology firms, and recruitment platforms.
For years, developers embedded these keys in apps and webpages for services like Maps or Firebase, relying on Google’s guidance that they were not sensitive credentials. That assumption no longer holds.
The risk emerged after Gemini and the Generative Language API were introduced into Google Cloud. When the AI service is switched on in a project, existing API keys in that project automatically gain the ability to send requests to the AI model.
This means a key that was visible to anyone on the internet can now be used to interact with Google Gemini, sometimes accessing stored prompts, uploaded files, or cached responses, depending on configuration. What developers treated as a public label has quietly become a functional login.
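The mechanism is easy to see in the request shape itself: the public Generative Language API accepts an API key as a query parameter, so any key scoped to a project with the service enabled can authorise a generation call. The sketch below constructs, without sending, such a request; the endpoint is the real Generative Language API, but the key is a placeholder and the model name is illustrative.

```python
import json
import urllib.request

# Public Generative Language API endpoint (model name is illustrative;
# available models vary by project and over time).
GLA_ENDPOINT = ("https://generativelanguage.googleapis.com/"
                "v1beta/models/gemini-pro:generateContent")

def build_probe(api_key: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) the POST a leaked key would allow."""
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        f"{GLA_ENDPOINT}?key={api_key}",   # key travels as a query parameter
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_probe("AIza...placeholder", "Hello")
print(req.full_url)
```

Because the key rides in the URL rather than in a signed header, possession of the string alone is sufficient; there is no secondary secret to present.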
The attack itself requires little technical skill. In many cases, a key can be copied from a webpage’s source code and used immediately.
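Harvesting such keys is equally simple, because Google Cloud API keys follow a well-known format: the literal prefix "AIza" followed by 35 URL-safe characters. A short, hedged sketch of the kind of scan an attacker (or a defender auditing their own pages) could run over page source; the key in the sample HTML is fabricated for testing:

```python
import re

# Google Cloud API keys: "AIza" + 35 characters from [0-9A-Za-z_-].
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_keys(page_source: str) -> list[str]:
    """Return every string in the source that matches the key format."""
    return GOOGLE_KEY_RE.findall(page_source)

# Fabricated example of a key embedded in a Maps script tag.
html = ('<script src="https://maps.googleapis.com/maps/api/js'
        '?key=AIzaSyFAKEKEY_FOR_TESTING_0123456789abc"></script>')
print(find_keys(html))
```

The same pattern is what secret-scanning tools already flag in public repositories; what has changed is not detectability but the blast radius of a match.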
Beyond the threat to data, the financial fallout could be severe. Misused AI keys can quickly consume usage quotas and generate large bills, potentially disrupting services that depend on the same cloud resources.
Cybersecurity experts say the episode reflects a larger problem in the AI transition. Older security practices are being outpaced by new capabilities. Developers who followed earlier documentation now find themselves exposed without having changed a line of code.
Google has begun blocking leaked keys from accessing Gemini, tightening default restrictions, and planning user notifications. But legacy deployments, especially mobile apps and long-running web services, remain difficult to fix overnight.
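Teams do not need to wait for those notifications: API keys can be restricted so they only work for the services and referrers they were issued for. A configuration sketch using the `gcloud` CLI is below; the project, key ID, and referrer are placeholders, and flag names should be checked against current gcloud documentation before use.

```shell
# List API keys in a project to find candidates for restriction
# (placeholder project ID).
gcloud services api-keys list --project=my-project

# Restrict a key (placeholder ID) to a single backend service, so it can
# no longer reach the Generative Language API.
gcloud services api-keys update KEY_ID \
    --api-target=service=maps-backend.googleapis.com

# Additionally pin the key to the site it was embedded in
# (placeholder referrer).
gcloud services api-keys update KEY_ID \
    --allowed-referrers="https://example.com/*"
```

Restricting by API target directly closes the gap the researchers describe: a key locked to Maps cannot be replayed against Gemini even if the project has the AI service enabled.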
The incident serves as a reminder that in the generative AI era, risk is no longer static. As cloud platforms evolve, even routine configurations can acquire new power and new consequences without warning.