Elon Musk’s artificial intelligence chatbot Grok has come under intense criticism after allegedly spreading inaccurate and misleading information about the deadly Bondi Beach attack in Sydney. The episode has once again raised concerns over the reliability of generative AI tools during rapidly evolving breaking news situations.
The controversy started on December 14, when users on the social media platform X questioned Grok’s responses to images and videos related to the shooting at a Hanukkah gathering near Bondi Beach. Australian authorities classified the incident as a terrorist attack, confirming that at least 16 people were killed, including one of the assailants.
According to a report by Gizmodo, Grok repeatedly misidentified widely circulated footage tied to the attack. In one instance, when asked to explain a video showing Al Ahmed, a bystander credited with confronting one of the attackers, the chatbot incorrectly described the clip as an old viral video of a man climbing a palm tree in a parking lot. It also questioned the clip’s authenticity and claimed there was no confirmation of injuries.
In another example, Grok reportedly mislabelled an image of the injured Al Ahmed as that of an Israeli hostage taken during the October 7 Hamas attacks. Footage showing a police shootout with the attackers was likewise mistakenly identified as video from Tropical Cyclone Alfred, which struck Australia earlier this year. Further investigation found that Grok also appeared to misidentify a man named Edward Crabtree as the person who disarmed the gunman.
Grok has since corrected at least one inaccurate post after what it described as a ‘reevaluation’. However, neither xAI, the company behind the chatbot, nor Elon Musk has publicly responded to the broader concerns raised by these errors.
The Bondi Beach attack occurred during a Hanukkah gathering, where police say a father and son, aged 50 and 24, opened fire on attendees. Investigators revealed that the father legally owned six firearms believed to have been used in the shooting. Authorities also stated that both suspects had pledged allegiance to the Islamic State group, with two IS flags reportedly recovered from a vehicle near the scene.
Experts warn that incidents like this illustrate the risks of depending on AI chatbots for updates during fast-moving emergencies. The episode underscores an urgent need for stronger verification systems and greater accountability as AI becomes more integrated into everyday life. It also poses a question for individual users: should you be getting your news from AI at all?