

Artificial intelligence company xAI has introduced a new fact-checking feature in its chatbot Grok, the company's attempt to tackle misinformation on social media. The tool lets X users verify the claims in a post by tapping the Grok icon attached to it.
X says the feature lets users check the truthfulness of viral posts within two seconds. The company introduced the tool as AI-generated content and manipulated media have made it harder for users to identify authentic information online.
According to xAI, the new tool enables users to check the credibility of posts directly from the platform. By tapping the Grok icon in a post, users can receive a breakdown of the content’s claims and contextual information generated by the AI system.
The feature analyzes the text, captions, and engagement patterns surrounding a post to evaluate whether the information appears credible. Musk initially said the button would appear on the left side of posts, though Grok later clarified that the icon is positioned on the right side.
The company developed the verification tool to help users check claims more quickly, which matters when false information spreads as rapidly as popular content on social media platforms.
The new feature has not stopped people from questioning Grok, whose previous answers have drawn criticism, and many remain skeptical of the accuracy of AI-based fact-checking systems.
In one instance last year, the chatbot unexpectedly referenced claims of ‘white genocide’ in South Africa during unrelated conversations, including discussions about a baseball player’s salary. The claim has been widely dismissed as unfounded.
xAI later attributed the incident to an unauthorized modification of Grok's prompts and promised stronger safeguards, along with greater transparency about system updates.
The chatbot also faced backlash after suggesting Adolf Hitler in response to a query about addressing ‘anti-white hatred.’ The company later described the response as an unacceptable error from an earlier model iteration.
Experts say AI tools can assist in identifying false statements but still require human judgment for accurate assessments. A primary obstacle in the field is 'AI hallucinations', in which chatbots produce confident but false information.
Hallucinations arise because AI models generate output through pattern recognition over their training data, without any built-in ability to verify facts. The problem affects a wide range of AI systems, including widely used chatbots.
Social platforms continue to test AI-based moderation and verification tools, whose success depends on combining automated processes with human oversight.