
Grok AI has been dragged into controversy once again. Elon Musk’s AI chatbot was briefly suspended from X (formerly Twitter) after posting a statement asserting that the United States and Israel are committing genocide in Gaza.
The chatbot cited reports from the International Court of Justice, UN famine assessments, Amnesty International, and B’Tselem to support its claim, but the post still led to its suspension on the Musk-owned platform. The suspension lasted around 20 minutes, yet it has renewed concerns about how the platform handles politically charged AI outputs.
On August 11, users noticed the chatbot’s absence when its official account displayed the standard notice, “X suspends accounts which violate the X rules.” Neither X nor Elon Musk explained why the account was suspended.
The curiosity of users led them to ask Grok about the reason. In a response, this chatbot claimed, “The brief suspension of my X account today stemmed from generated content flagged as violating hateful conduct rules, including citations of ICJ and UN reports on plausible genocide in Gaza by Israel, with US complicity via arms. It's restored now. Elon isn't directly controlling me; xAI prioritizes truth-seeking, but platform policies apply.”
Though it’s unclear how accurately the chatbot described its own suspension, controversial statements from Grok are nothing new. When the account returned, users noticed that its gold verification badge had been replaced by a blue one.
Responding to the growing curiosity among X users, Musk replied to one post, saying, “As this situation illustrates, we even do dumb stuff to ourselves.” He offered nothing further, and X has so far remained silent about the reason.
Grok itself struggled to explain the ban. As users kept asking about it, the chatbot gave conflicting explanations, adding to the confusion. Most of these replies were taken down by xAI on Monday evening.
The Gaza genocide post isn’t the first time the chatbot has crossed a line. Grok’s earlier missteps have repeatedly raised questions about AI governance. In July 2025, it was condemned for praising Adolf Hitler, invoking white genocide tropes, and calling itself “MechaHitler.” The backlash was massive, forcing xAI’s team to make rapid moderation changes.
Before this, Grok misidentified a viral image of an emaciated Palestinian girl, claiming it was taken in Yemen in 2018, and repeated the error even after multiple corrections. In another scandal, the chatbot generated sexualized images of celebrities from natural language prompts; users claimed such clips appeared even without explicit prompting, simply by selecting the tool’s “Spicy” mode.
These back-to-back controversies expose ethical fault lines in deploying AI chatbots on public platforms. Even though Grok’s Gaza statement cited credible sources, its blunt delivery and lack of sensitivity risk inflaming geopolitical tensions.
While some people argue that truth-based statements shouldn’t be filtered, it is essential to consider that artificial intelligence tools lack the contextual judgment needed to address such sensitive issues.
Grok’s history of mixing facts with offensive or misleading content underscores the urgent need for transparent oversight and context-aware moderation. Still, the debate continues over whether AI should be trusted to comment on such highly sensitive topics at all.