News

AI Regulation Debate Intensifies After Chatbots Suggest Unlicensed Gambling Sites

Investigation Finds AI Chatbots Recommending Offshore Gambling Sites Without a UK Licence

Written By : Somatirtha
Reviewed By : Radhika Rajeev

AI chatbots have become everyday assistants for millions of users, helping with tasks ranging from writing emails to answering complex questions. However, a new study suggests these tools may sometimes guide users into risky territory.

A joint investigation by The Guardian and Investigate Europe has found that several leading AI chatbots can recommend online casinos that operate without a UK licence.

The findings raise fresh concerns about whether generative AI systems are adequately equipped to prevent harmful or illegal suggestions.

What Did the Investigation Find About Chatbot Responses?

The investigation examined five widely used AI services: ChatGPT by OpenAI, Copilot by Microsoft, Gemini by Google, Grok by xAI, and Meta AI by Meta. Researchers prompted each chatbot with questions related to gambling platforms that operate outside UK regulations.

According to the report, all five chatbots suggested offshore gambling websites. Some responses allegedly highlighted the platforms' welcome bonuses, faster withdrawal times, and the option to deposit in cryptocurrency.

Did the AI Responses Offer Ways of Bypassing Safety Checks?

The investigation also found that the chatbots' responses went beyond naming gambling platforms. In some cases, the AI tools allegedly explained how users could bypass the safety checks designed to protect players.

These checks typically verify that players are not gambling with illegal funds and help detect signs of irresponsible gambling. The AI responses allegedly pointed users to gambling sites that do not operate under GamStop, the UK's national self-exclusion programme.

Campaigners and addiction specialists have criticised these responses, arguing that such guidance puts vulnerable people at risk of financial and psychological harm. In their view, AI systems should not provide information that enables dangerous activities.


How Are Tech Companies Responding?

Technology companies say their systems already include safeguards to prevent harmful outputs. OpenAI stated that its chatbot is designed to refuse requests encouraging harmful behaviour and instead offer factual information or lawful alternatives.

Microsoft also said its AI assistant relies on several layers of protection, including automated monitoring and human review, to limit the likelihood of unsafe responses.

Could Regulators Tighten Scrutiny?

Regulators in the United Kingdom have begun examining the issue. Officials said AI platforms must comply with the Online Safety Act, which requires technology companies to remove dangerous and unlawful content from their platforms.

The findings have intensified debate over who should be held accountable in AI development, as chatbots now play a growing role in guiding users' online searches and decisions.
