
Meta has removed over 635,000 accounts from Facebook and Instagram for violating child safety guidelines. This action comes as the company faces lawsuits and increasing criticism regarding its approach to protecting teens online.
According to Meta, 135,000 of those accounts were removed for leaving sexually explicit comments on posts by children under the age of 13. A further 500,000 accounts linked to adults were removed for inappropriate contact with minors.
Meta said the deletions are part of a broader effort to safeguard teenagers. The company is using AI-based age detection to flag accounts that likely belong to minors and to identify potentially dangerous behavior.
Accounts presumed to belong to minors are moved into teen profiles, which are locked down by default with stricter privacy settings and limited interactions. They permit direct messages only from people the teen already follows or has previously messaged.
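Meta has not published how this rule is enforced; as a rough illustration only, the default messaging check could be modeled like the Python sketch below, in which the class, field, and account names are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the default teen-profile messaging rule Meta
# describes: DMs are allowed only from accounts the teen already follows
# or has previously messaged. Names and structure are illustrative and
# are not Meta's actual implementation.

@dataclass
class TeenProfile:
    username: str
    following: set[str] = field(default_factory=set)       # accounts the teen follows
    prior_contacts: set[str] = field(default_factory=set)  # accounts with an existing thread

    def can_receive_dm_from(self, sender: str) -> bool:
        # Locked down by default: any sender not in either set is rejected.
        return sender in self.following or sender in self.prior_contacts

# Example: a teen who follows one account and has messaged another.
teen = TeenProfile("teen_user", following={"friend_a"}, prior_contacts={"cousin_b"})
assert teen.can_receive_dm_from("friend_a")      # followed -> allowed
assert teen.can_receive_dm_from("cousin_b")      # prior conversation -> allowed
assert not teen.can_receive_dm_from("stranger")  # unknown sender -> blocked by default
```

The key design point the sketch captures is that the policy is an allowlist, not a blocklist: contact is denied unless an existing relationship grants it.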
Meta has also introduced several tools to help teens stay safe:
Notifications when a message is from someone they don’t follow
One-tap feature to block or report suspicious accounts
Information about when an account was created
Teenagers appear to be using these tools. In June, users blocked more than 1 million accounts and reported another million after seeing safety warnings.
Meta also reported success with its nudity-protection feature, which blurs suspected nude photos. The company says 99% of teens have kept the feature enabled, and about 45% of users who received a warning chose not to forward the flagged content.
In an article published by Mashable, James P. Steyer, CEO of Common Sense Media, called the changes ‘too little, too late’ and criticized Meta for neglecting safety issues for years.
These measures come after dozens of US states sued Meta for allegedly harming young users. The lawsuits claim that Meta’s apps are addictive and expose teens to harmful content.
Critics also accuse Meta of touting the removal of predatory accounts as a tactic to blunt support for the Kids Online Safety Act, which aims to protect children online.
In response, Meta claims to be committed to enhancing safety on its platforms. "We want to give teens greater control and protect them from harm," the company stated in a blog post.