

As AI technology plays an increasingly prominent role in the adolescent experience, more teens than ever are engaging with chatbots and AI assistants to support learning, provide entertainment, and enable social interaction.
This technology creates real opportunities for adolescents, but it also raises the risk of exposure to inappropriate content and potential harm to mental health.
To address these concerns, OpenAI and Anthropic have rolled out new safety features aimed at teenage users, including enhanced parental controls and stricter age-screening processes.
OpenAI has advanced its approach to youth safety with the introduction of U18 Principles, which articulate how ChatGPT will engage with users between the ages of 13 and 17 in a responsible, age-appropriate manner.
For users in this age range, the system prioritizes safe interactions, directs them toward human support, and blocks unsafe content, including material involving self-harm, sexual role play, and dangerous challenges.
Alongside these model-level safeguards, OpenAI has given parents tools to manage their child's AI usage: parents can link to their child's profile, set usage hours, and block access to sensitive content.
These features help children build healthy digital habits while preserving the educational and creative benefits of AI.
Anthropic has taken a stricter line, requiring Claude users to be at least 18 years old. To enforce this policy, the company is enhancing its systems to detect underage use through conversational signals, self-disclosure, and automated classifiers. Accounts suspected of belonging to minors can be reviewed and disabled, reducing the risk of inappropriate interactions.
The company is also refining how Claude responds to sensitive topics, especially suicidal thoughts and self-harm. Rather than positioning the AI as a source of emotional support, responses encourage users to seek help from trusted people or professional resources, a cautious approach that prioritizes human intervention over AI advice in high-risk situations.
These initiatives arrive amid growing scrutiny of AI tools and their effects on teenage mental health. A study by Stanford Medicine and Common Sense Media found that popular chatbots still struggle to provide safe, consistent responses on mental health topics. Such findings highlight gaps in earlier safety measures and underscore the need for targeted protections for younger users.
Teachers and pediatric psychologists have also raised concerns about teens forming emotional attachments to AI systems. Without adequate precautions, chatbots risk being treated as substitutes for human support, which could harm children and teenagers during critical developmental stages. Safety-focused design is increasingly recognized as a necessity rather than an option.
Regulatory and social pressure are major drivers of these changes. Lawmakers and advocacy groups have called for stronger age verification and accountability following incidents that exposed safety weaknesses, and OpenAI's recent updates, including parental controls, came amid heightened legal and public scrutiny in the United States.
Overall, these measures represent a clear improvement over earlier safeguards. Through their age- and risk-based approach, OpenAI and Anthropic are setting new expectations for responsible AI development, and continued research, expert collaboration, and periodic updates will shape how future AI systems interact with children.
The recent moves by OpenAI and Anthropic mark a significant step forward in protecting teen AI users. The combination of age-based rules, parental controls, and stricter verification helps manage risks while enabling safe, responsible AI interaction for young people.
As AI becomes a fixture of teenagers' daily lives, ongoing research and collaboration remain essential. These improvements suggest that, when properly designed and regularly updated, AI can be both beneficial and safe for younger generations.