Plaintiffs' lawyers in the United States are questioning Meta over how long it took to introduce protections for teenagers on Instagram, as part of an ongoing federal lawsuit examining whether major social media platforms are designed in ways that harm young users.
The latest deposition focused on why tools that automatically blur explicit images in teen accounts were rolled out only in April 2024, despite internal awareness of the risks years earlier.
During questioning, Instagram head Adam Mosseri was asked about company emails dating back to 2018. Documents presented in court showed executives had acknowledged that sexually explicit images and other inappropriate material could be shared through Instagram’s direct messaging feature.
Lawyers argued that the long gap between the company identifying the problem and deploying a solution raises concerns about whether engagement and growth were prioritised over child safety.
Mosseri rejected the claim that Meta should have warned parents that private messages were not proactively monitored beyond the removal of child sexual abuse material. He maintained that the company has consistently tried to balance user privacy with safety, adding that harmful content can be circulated on nearly any messaging platform.
Meta spokesperson Liza Crenshaw said the company has spent years working with parents, experts, and law enforcement to strengthen teen protections. She pointed to parental controls and the introduction of Teen Accounts as steps taken to improve safety, while noting that the strategy continues to evolve.
Survey data discussed during the proceedings indicated that nearly one in five users aged 13 to 15 reported receiving unwanted sexual imagery on the platform. A smaller but significant share also said they had encountered self-harm-related content on the app within the preceding seven days.
The lawsuit, being heard in the US District Court for the Northern District of California, also involves YouTube, TikTok, and Snap. It seeks to determine whether core design features of these platforms promote addictive behaviour or expose minors to unsafe experiences.