Meta’s child protection claims have taken a serious blow. Meta has long maintained that it has implemented multiple tools to protect children on its platforms. However, a recent report finds that 30 of 47 such tools are ineffective, though it is unclear whether they fail entirely or have simply been discontinued. Only eight tools work as intended.
These findings come from a new independent study focused largely on the Instagram safety tools that Meta claims protect teenagers from abuse. The research was conducted by child-safety advocates in collaboration with Northeastern University, and its results raise the question of whether Instagram’s protective framework for teens is more cosmetic than real.
The study, titled ‘Teen Accounts, Broken Promises,’ states that the safeguards Instagram advertises as protecting children often fail under real-world conditions. To run the tests, the researchers created accounts mimicking teenagers, parents, and malicious adults.
The report says that search filters introduced to block harmful terms associated with self-harm or eating disorders could be bypassed with minor spelling changes. Similarly, anti-bullying filters fail to block known abusive words, which still appear in direct message requests.
Other tools designed to prevent teens’ exposure to harmful content also often fail in real-life scenarios. Instagram’s claim to block unwanted adult contact fared no better: adult accounts continued to appear in teens’ suggested contacts and follower lists.
Despite these shortcomings, some tools still function. Notably, Quiet Mode and stricter parental-approval settings work as intended. However, these bright spots are easily overshadowed by the flaws.
Most importantly, the report states, “when a minor experiences unwanted sexual advances or inappropriate contact, Meta’s own product design inexplicably does not include any effective way for the teen to let the company know of the unwanted advance.”
Meta quickly defended itself, calling the report “misleading, dangerously speculative.” Company spokesperson Andy Stone claimed that the study mischaracterizes Meta’s safety initiatives and exaggerates flaws by holding Instagram to standards the platform never promised.
Stone claimed that teenagers who actually use these safety measures on Instagram “saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night.”
However, Meta carefully avoided disclosing what percentage of parents actually use these parental control tools. It has also been noted that some tools the report describes as eliminated have in fact been integrated into other features.
These findings highlight a fundamental tension: safety tools alone are insufficient to safeguard children within a system built around algorithmic engagement. While Meta insists its tools work as designed, the study reveals a very different picture, one riddled with flaws and loopholes.
Ultimately, the report points to a larger question: whether tech giants truly prioritize teen safety when their revenue depends on keeping teens online for longer.