
Google’s Gemini is one of the world’s most popular AI assistants, next to ChatGPT, with over 350 million monthly users. Unfortunately, hackers are using “prompt injection” to deceive Gemini into taking questionable actions through Google Calendar invites.
Researchers had already revealed this vulnerability in the AI tool to the tech giant in February. These attacks were also discussed in detail at the Black Hat cybersecurity conference. Furthermore, Andy Wen, a Senior Director of Security Product Management at Google Workspace, has expressed concerns about the issue and shared some important discoveries with Wired.
Gemini AI is asked by the attacker to provide a summary of their calendar events. The malicious prompt gets activated and orders Google Home AI agent to carry out some actions like opening the window, turning off the lights, disabling security cameras, turning on a connected boiler, and more.
This malicious prompt utilizes Google’s ability to act on data received from applications like Google Calendar. The vulnerability portrays how the integration of tools that can control physical objects into large language models can put somebody at risk. The company is planning to enhance its security measures to maintain user privacy and safety.
Also Read: Gemini CLI Hack Exposes Critical Security Flaw in Coding Tool
Andy Wen, in response to these prompt injection attacks and concerns over vulnerabilities of Gemini AI, said that “It’s going to be with us for a while, but we’re hopeful that we can get to a point where the everyday user doesn’t worry about it that much.” Further adding that such attacks are “exceedingly rare” in practical situations.
However, the tech firm has taken these attacks “extremely seriously,” and soon they will fix the underlying issues and ensure user safety and privacy.
In a nutshell, Google’s Gemini becomes vulnerable to hacker attacks when a user asks it to fetch any data from Google Calendar. A hidden prompt gets triggered during the conversation, which can perform actions like opening windows and turning off lights when the chatbot is connected with Google Home.
The problem with advanced AI tools is that as they keep getting more advanced, their vulnerabilities also rise. Hackers can find more ways to misuse the AI’s capabilities. Even so, Google’s Andy Wen has confirmed that such issues will soon be resolved.