On September 25, Meta announced plans to expand its youth safety feature, Teen Accounts, to Facebook, Messenger, and Instagram users worldwide, bringing default safety protections to hundreds of millions of teens. Over the past year, Meta has built out Teen Accounts by restricting who can contact and discover teen accounts, filtering explicit material, and disabling the Live feature for users under 16. Meta describes Teen Accounts as a “major move to assist in keeping teens secure” and a way to reassure parents, though some child safety experts dismiss it as a hollow promise.
A report published today accuses Meta’s Teen Accounts and related safety features of failing to adequately protect users. Titled “Teen Accounts, Broken Promises,” the report found that many key features, including Sensitive Content Controls and tools meant to prevent inappropriate interactions, did not work as advertised. The analysis was conducted by Cybersecurity for Democracy and Meta whistleblower Arturo Béjar, in collaboration with researchers at New York University and Northeastern University and child advocacy groups including Fairplay, the Molly Rose Foundation, and ParentsSOS.
The report aims to inform parents who may assume Meta’s safety claims guarantee their children’s protection on Instagram; it argues the features are largely ineffective. Researchers evaluated 47 of the 53 safety features Meta lists and found that 30 of them (64 percent) were ineffective, nine came with limitations, and only eight worked as advertised. In tests, adult accounts could still communicate with teens, and teens could message adults who did not follow them. Bullying messages slipped past restrictions, teens were shown inappropriate content, and reporting sexual messages or material proved ineffective.
Cybersecurity for Democracy co-director Laura Edelson explained that the research simulated real-world platform interactions among predators, parents, and teens. She criticized Meta’s approach as ineffective and misguided. Béjar compared Meta’s responsibility to a car manufacturer’s duty to ensure that safety features like airbags and brakes actually work. Josh Golin of Fairplay accused Meta of misrepresenting its efforts.
Meta countered that the report mischaracterizes its work, noting that Teen Accounts include automatic safety protections and parental controls. The company contended that teens using these protections saw less sensitive content and received fewer unwanted contacts.
Maurine Molak of David’s Legacy Foundation and Ian Russell of the Molly Rose Foundation, both of whom lost children to suicide linked to cyberbullying, endorsed the report. Parents around the world worry about technology’s impact on teen mental health.
In April, Meta announced a renewed emphasis on Teen Accounts amid federal scrutiny of youth mental health. Tech companies stress parent and teen education alongside platform features; experts counter that this places an undue burden on parents. Meta’s automated tools are meant to relieve that pressure, but parents want safer products in the first place.
Child safety organizations criticize Meta’s incremental approach to safety, calling Teen Accounts a “prominent announcement” designed to burnish the company’s image before Congress. Studies indicate that teens are still exposed to sexual content. Meta has since removed over 600,000 predatory accounts and restricted teens’ access to AI avatars.
While advocates agree on the need for better online safety, opinions diverge on federal regulation. Some of the report’s authors back the Kids Online Safety Act (KOSA) and urge action from the FTC and state attorneys general. UK participants call for strengthening that country’s 2023 Online Safety Act.
Meta whistleblower Cayce Savage recently called for external regulation. The report’s authors stress the need for stronger social media safety tools, arguing that Meta’s current offerings fall short.
If you are facing a mental health crisis, please contact the 988 Suicide & Crisis Lifeline at 988, or visit 988lifeline.org. Trans Lifeline: 877-565-8860, Trevor Project: 866-488-7386, Crisis Text Line: text “START” to 741-741, NAMI HelpLine: 1-800-950-NAMI or [email protected]. For international resources, visit findahelpline.com/i/iasp.