News

Google’s Antigravity Faces Security Warnings Within 24 Hours of Launch

Data Expert, Aaron Portnoy, Exposes Critical Security Flaws in Google’s Antigravity Agentic AI; Backdoor Data Leak is One of Them

Written By : Simran Mishra
Reviewed By : Manisha Sharma

Google launched Antigravity, an agentic AI coding platform with Gemini 3 on November 18. It allows AI agents to plan, edit, run, and verify code across editors, terminals, and browsers. While early users applauded the tool’s speed and automation, security researchers flagged critical issues within a day of launch.

New Interfaces for AI Coding

Antigravity offers two interfaces: Editor View and Manager Surface. The former acts like an AI-powered IDE with inline commands, and the latter lets users deploy autonomous agents across multiple workspaces. Agents can generate features, run the terminal, and test in a browser. The design shifts coding from assistant tools to autonomous agent workflows.

Security teams found a troubling pattern. Antigravity asks users to mark folders as trusted. This design creates a trade-off. Marking a workspace trusted unlocks full AI features. Marking it untrusted disables agent functionality. Researchers warned that threat actors could exploit this pressure to gain persistent access.

Aaron Portnoy of Mindgard demonstrated a serious exploit. He coerced an agent to replace a global MCP configuration file with a malicious version inside a project that runs every time Antigravity launches. The backdoor survives closing projects and even reinstallation. Manual deletion of the malicious file removes persistence. The flaw affects Windows and Mac machines.

Security Risks and Research Findings

Researchers also described prompt injection risks. Agents that process untrusted data may follow malicious instructions embedded in code or markdown. That behavior can leak files or run harmful commands. Another firm, Prompt Armor, raised similar data exfiltration concerns. Google listed these issues on its bug-hunting page.

Google responded swiftly. The company invited external security researchers to report bugs. Google said teams will post updates publicly while fixes roll out. The company also acknowledged two known issues. One concerns data exfiltration through manipulated content. The other deals with agent-influenced command execution.

Antigravity highlights a wider problem. Agentic AI increases automation but widens attack surfaces. Enterprises gain productivity but also face new risks. Chief information officers must balance autonomy against hardened boundaries. Security teams should sandbox agents, monitor agent actions, and apply strict workspace policies.

Antigravity could change software development. The platform also shows that security must lead design. Fast innovation needs rigorous testing. Agents need clear limits and robust defaults. Otherwise, code automation may invite persistent threats.

Also Read – AMINA Bank and Crypto Finance Group Pilot Real-Time Payments on Google Cloud Ledger

Join our WhatsApp Channel to get the latest news, exclusives and videos on WhatsApp

How Nexchain Outperforms Best Crypto Presales After 250% Black Friday Bonus Sparks Industry-Wide Attention

Future of Dogecoin: Will the OG Meme Coin Surge Again?

Top Cities in Nigeria With Highest Crypto Adoption

DOGE’s 15x Forecast Is Impressive, Yet Ozak AI’s 100x Potential Raises Expectations

How to Buy Little Pepe Now While It's Still Undervalued: Is RXS the Next 100x Crypto?