AI chatbots have become everyday assistants for millions of users, helping with tasks ranging from writing emails to answering complex questions. However, a new study suggests these tools may sometimes guide users into risky territory.
A joint investigation by The Guardian and Investigate Europe has found that several leading AI chatbots can recommend online casinos that operate without a UK licence.
The findings raise fresh concerns about whether generative AI systems are adequately equipped to prevent harmful or illegal suggestions.
The investigation examined five widely used AI services: ChatGPT by OpenAI, Copilot by Microsoft, Gemini by Google, Grok by xAI, and Meta AI by Meta. Researchers prompted each chatbot with questions related to gambling platforms that operate outside UK regulations.
According to the report, all five chatbots suggested offshore gambling websites. Some responses allegedly highlighted the platforms' welcome bonuses, faster withdrawal times, and the option to deposit in cryptocurrency.
The investigation also found that the responses did not stop at naming the gambling platforms. In some cases, the AI tools allegedly explained how users could bypass player safety checks. These checks typically verify that players are not gambling with illegal funds and help prevent irresponsible gambling. The responses allegedly pointed users to gambling sites that do not operate under GamStop, the UK's national self-exclusion programme.
Campaigners and addiction specialists have criticised the responses, arguing that such guidance puts vulnerable people at risk of financial and psychological harm. In their view, AI systems should not supply information that enables people to engage in dangerous activities.
Technology companies say their systems already include safeguards to prevent harmful outputs. OpenAI stated that its chatbot is designed to refuse requests encouraging harmful behavior and instead offer factual information or lawful alternatives.
Microsoft also said its AI assistant relies on several layers of protection, including automated monitoring and human review, to limit the likelihood of unsafe responses.
Regulators in the United Kingdom have begun examining the issue. Officials say AI platforms must comply with the Online Safety Act, which requires technology companies to remove dangerous and unlawful content from their platforms.
The findings have intensified the debate over who should be held accountable for AI systems' outputs, as chatbots play a growing role in guiding users' online searches and decisions.