
A recent report by a social network analysis company has found that tens of thousands of AI chatbots pose a significant safety threat to children, enabling dangerous interactions despite the platforms' claimed safety precautions.
The Graphika report, which surveyed AI character sites such as Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI, found tens of thousands of chatbots designed to engage in sexually explicit, violent, or extremist conversations.
The most disturbing finding was the existence of more than 10,000 chatbots classified as ‘sexualized minor personas’ on these sites. Chub AI alone hosted more than 7,000 chatbots presented as sexualized minor female characters and another 4,000 tagged as ‘underage’ that engaged in sexually explicit conversations.
The report also identified chatbots promoting eating disorders, self-harm, and extremism. Although less prevalent, these bots still pose a serious threat to vulnerable users by encouraging dangerous behaviors and ideologies.
Graphika found that many chatbot creators exploit weaknesses in AI content filters to keep their bots operating. Tactics include embedding hidden jailbreak prompts, using coded language, swapping API keys, and obscuring character ages to slip past moderation. Communities on 4chan, Discord, and special-interest subreddits share guidance on circumventing these safeguards, helping keep the chatbots online.
As concerns grow, groups such as the American Psychological Association have petitioned the Federal Trade Commission to investigate the safety of AI companion platforms. Lawmakers are also stepping in with new legislation, including a California bill aimed at curbing chatbot addiction among children.
Experts caution that stricter regulation and better moderation technology are needed to prevent these chatbots from causing real-world harm.