

Grok did not predict the US-Israel strike on Iran. On March 1, speculation was rife that the AI chatbot had predicted the joint US-Israel attack in advance. In reality, the viral claim traces back to a war-gaming exercise The Jerusalem Post ran on February 25, 2026. The publication asked Grok, ChatGPT, Gemini, and Claude the same question: pick a likely date for a hypothetical US attack on Iran.
The goal was to compare how AI models reason under pressure, not to forecast a real military operation.
Grok gave a single-day answer: February 28, 2026. ChatGPT first suggested March 1, then shifted to a March 3-6 window. Gemini pointed to March 4-6. Claude declined to give a date.
When the US-coordinated strikes began on February 28, Grok's answer went viral. Elon Musk amplified the moment on X, calling prediction the ultimate measure of intelligence, and the post pushed the idea that Grok had outperformed rival models.
The Jerusalem Post later clarified that the exercise did not validate real-world forecasting. The prompt forced each model to pick from a narrow set of plausible timelines.
Rising regional tensions, military movement, and diplomatic signals had already created a limited window for any potential strike. A match with the real date reflected probability, not foresight.
Defense officials say the operation had been planned for months and that the launch window was fixed weeks in advance. The information had remained classified.
No public AI system, including Grok, had access to it. All four models relied on open-source geopolitical signals to generate their answers.
The episode quickly fed into the ongoing contest between xAI, OpenAI, Google, and Anthropic. Social media posts framed the coincidence as a technical win.
The episode showed how selective screenshots and platform amplification can turn a controlled thought experiment into a claim of machine foresight.