Meta & Character.AI Accused of Misleading Kids With Deceptive AI Marketing

Ken Paxton accuses Meta AI and Character.AI of misleading users with AI therapy claims
Written By: Anudeep Mahavadi
Reviewed By: Atchutanna Subodh

Artificial intelligence chatbots are increasingly used for everything from entertaining conversations to emotional support. However, Texas Attorney General Ken Paxton has accused Meta AI Studio and Character.AI of misrepresenting their technology in ways that could mislead users into thinking they provide real therapy. 

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton said in a press release, stressing the risks for vulnerable users.

Meta and Character.AI Respond to Allegations

The Texas Attorney General’s office claims that Meta AI and Character.AI have enabled AI personas that act like therapists despite lacking medical training or oversight. On Character.AI, one of the most popular user-created bots is “Psychologist,” which is widely used by younger audiences. Both companies, however, maintain that they are transparent about their chatbots’ limitations.

“We clearly label AI chatbots, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals, and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.” Character.AI added that it issues extra warnings for user-created bots named “therapist” or “doctor.”

Why the AI Therapy Controversy Matters

The AI therapy controversy reflects deeper concerns about mental health risks and the line between innovation and responsibility. Paxton also noted privacy issues, pointing out that while chats may appear private, terms of service reveal they can be logged and used for advertising or algorithm training. 

While newer chatbot platforms have incorporated disclaimers and content filters, critics argue that these measures still provide insufficient protection when children turn to the technology for emotional support.

The case could set significant precedents for defining the responsibilities of companies like Meta AI and Character.AI. A stronger commitment to transparency could help developers earn long-term trust and mitigate potential harm.
