Gemini CLI Hacked in 48 Hours via Sneaky README Prompt Exploit

Gemini CLI Hacked Within 48 Hours of Launch: Hidden Prompt Injection in README File Exposes Critical AI Security Loophole
Gemini CLI Hacked in 48 Hours via Sneaky README Prompt Exploit
Written By:
Somatirtha
Reviewed By:
Atchutanna Subodh
Published on

When Google released Gemini CLI on June 25, 2025, it made the tool a revolutionary developer AI assistant. As a tool that can be used directly in the terminal, Gemini CLI complements Gemini 2.5 Pro, the company's most powerful AI model for code generation and reasoning.

However, within 48 hours, security researchers had uncovered a critical flaw that enabled attackers to steal sensitive information from developers’ computers by concealing code within a README file.

The attack illustrates an expanding threat in the age of AI-fueled development: prompt injection attacks, which prey on the very characteristics that make language models useful, such as their willingness to obey instructions and interpret natural language input.

How Did the Code Exploit Work?

The flaw was found by security company Tracebit researchers, who created a realistic-looking open-source package with nothing but safe-looking code and a README.md document. When one developer requested that Gemini CLI explain the package, the package-reading tool parsed the README file and ran a command line buried in the file’s instructions.

The code exploit sequence started with a potentially innocuous grep command. This is widely used to search in files and is most likely to be allowed-listed by engineers.

This command extracted environment variables from the system, including API keys, login credentials, and other sensitive data. It secretly transmitted this information to an attacker-controlled server. Tracebit deceived the Gemini Terminal by inserting spaces, preventing the malicious portion from being displayed and allowing the command to go undetected.

Also Read: Miracle Pregnancy: AI Helps Couple Conceive After 19 Years of Trying IVF

Why Was Gemini Vulnerable?

The attack’s success was, in essence, owed to three main weaknesses:

Prompt Injection: Google Gemini had no reason not to trust that the README file wasn’t lying when it gave natural language prompts.

Inadequate Validation: The tool would take the first part of a command, check it against the allow list (grep), and then let the remainder of the commands slip away undetected.

Misleading User Output: The Gemini interface hid the complete and harmful command line from the user, making it easier to go unnoticed.

This attack also relied on an AI security behavioral flaw. This flaw is known in large language models as AI sycophancy. It is the propensity of models to obey directions to an extreme degree, even when they conflict with safety measures.

Google’s Response and Broader Implications

Google soon rolled out version 0.1.14 of Gemini CLI, which fixed the flaw and enhanced command validation. The firm classified the flaw as Priority 1, Severity 1, which is its highest security level. 

Researchers caution that this is only one instance of a far larger category of threats that AI tools are exposed to, particularly those with agent-like functionality.

Tracebit CTO Sam Cox added that the exploits were unsuccessful on competing platforms such as OpenAI Codex and Anthropic’s Claude due to improved allow-list policies and more robust execution controls.

Wake-Up Call for Developers

The Gemini CLI exploit brings home an urgent fact: as tools become more autonomous and are given access to much deeper levels in a system, their attack surface expands. Developers must exercise utmost caution when working with AI-generated content.   

This is highlighted when dealing with command-line activities or system-level operations. In the meantime, should prompt injection and uncontrolled compliance issues remain unsolved at the root, even a helpful AI turns tricky.

Join our WhatsApp Channel to get the latest news, exclusives and videos on WhatsApp

Related Stories

No stories found.
logo
Analytics Insight: Latest AI, Crypto, Tech News & Analysis
www.analyticsinsight.net