In June 2024, McDonald's ended its AI-powered drive-thru ordering pilot, run in partnership with IBM, after numerous customer complaints about incorrect orders and misheard requests. Social media was flooded with videos of the AI botching simple orders, including one viral clip in which it kept adding hundreds of Chicken McNuggets to an order.
In April 2024, Elon Musk's Grok AI chatbot falsely accused NBA star Klay Thompson of vandalizing homes with bricks, apparently misreading basketball slang about him "throwing bricks" (missing shots) as a literal crime, and demonstrating how easily AI can generate and spread misinformation.
New York City's AI chatbot, MyCity, advised business owners to engage in illegal practices, such as paying workers below minimum wage and taking a cut of their tips, raising concerns about the reliability of AI for legal and regulatory guidance.
Air Canada's chatbot gave a customer incorrect information about bereavement fares, and a Canadian tribunal later ordered the airline to honor the chatbot's answer and compensate the customer, rejecting Air Canada's argument that the bot was a separate entity responsible for its own statements. The case highlights the risks of relying solely on AI for customer service.
A Chevrolet dealership's AI chatbot was manipulated via prompt injection into agreeing to sell a Chevrolet Tahoe for just $1 and declaring the offer "legally binding." The dealer did not honor the deal, but the incident exposed vulnerabilities in AI-driven sales systems.