

Whistleblowers have accused TikTok and Meta of risking user safety to win the battle for engagement, according to a BBC documentary examining the inner workings of social media algorithms. Insiders claim company research showed that posts that triggered outrage drove higher engagement, prompting decisions that allowed more harmful content to surface in users’ feeds.
More than a dozen current and former employees described intense pressure to match TikTok’s rapid rise. A former Meta engineer said senior leaders instructed teams to allow more ‘borderline’ harmful material, including misogynistic content and conspiracy theories, into feeds to improve engagement metrics.
At TikTok, a trust and safety employee claimed moderation teams struggled with overwhelming caseloads. He said managers sometimes prioritized complaints involving political figures over reports from teenagers facing cyberbullying or sexual exploitation. The whistleblower linked these decisions to efforts to avoid regulatory action and maintain government relationships.
Former Meta researcher Matt Motyl shared internal documents that flagged higher levels of harmful comments on Instagram Reels after its 2020 launch. The research showed that Reels contained more bullying and harassment incidents, hate speech, and incitement compared to the main Instagram feed.
Other company studies reportedly warned that engagement-focused algorithms amplify emotionally charged posts. Recommendation systems show users similar content because people tend to react more strongly to material that elicits anger or moral outrage.
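The dynamic those studies describe can be sketched as a toy engagement-ranking function. Everything here is an illustrative assumption (the field names, the weights, the scoring formula) and not any platform's actual code; it only shows how weighting strong reactions more heavily than passive ones pushes outrage-driven posts to the top of a feed.

```python
def engagement_score(post):
    # Hypothetical weights: strong reactions (comments, shares, angry
    # reactions) count more than passive likes, mirroring the claim that
    # engagement-optimized systems reward emotionally charged content.
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"]
            + 4.0 * post["angry_reactions"])

def rank_feed(posts):
    # Surface the highest-scoring candidates first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm",    "likes": 120, "comments": 5,  "shares": 2,  "angry_reactions": 1},
    {"id": "outrage", "likes": 40,  "comments": 60, "shares": 30, "angry_reactions": 80},
]

feed = rank_feed(posts)
print([p["id"] for p in feed])  # the outrage post outranks the better-liked calm post
```

With these invented weights the "outrage" post scores 690 against the "calm" post's 149, so it leads the feed despite receiving a third as many likes, which is the amplification effect the internal research reportedly warned about.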
Meta denied deliberately promoting harmful content and said it invested heavily in safety tools and policies. TikTok called the allegations ‘fabricated’ and pointed to preset safety features for teen accounts, stricter recommendation rules, and technology that blocks harmful videos before users see them.
The documentary examines how social media algorithms shape online behavior, an area of growing investigative scrutiny. Platforms whose recommendation systems reach more than two billion people across major global markets face mounting pressure from regulators to improve monitoring and accountability.