Instagram's efforts to shield teenagers from damaging online content appear to be failing, according to a study from child safety organizations and cybersecurity researchers. The study claims that teens are still exposed to posts concerning suicide and self-harm despite the platform's purported protective measures.
Researchers found that 30 of the 47 safety features aimed at protecting young users were largely ineffective or had been discontinued. The report also criticized the platform for allegedly encouraging minors to produce content that attracts sexualized comments from adults.
Meta has rejected the study's conclusions, maintaining that its safety tools have effectively reduced the amount of harmful content teenagers see on Instagram. The company contends that the findings misrepresent its ongoing efforts to improve safety.
A Meta spokesperson stated, "Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls," arguing that the tools developed for young users have successfully helped mitigate risks.
The investigation was led by Cybersecurity for Democracy in collaboration with several child advocacy organizations. Researchers created fake teen accounts to test Instagram's safety measures and found serious flaws, including that the platform allowed searches for harmful terms and exposed teens to inappropriate content that violated Instagram's own guidelines.
Andy Burrows, CEO of the Molly Rose Foundation, said the findings reflect a corporate culture at Meta that prioritizes engagement over user safety.
As social media companies face continued scrutiny over child safety, the study's findings point to a significant gap in Instagram's protections for its youngest users, and advocates are calling for stronger regulation and accountability for online platforms.