Meta Expands Teen Safety Protections Across Instagram, Facebook, and Messenger
Teenagers on Instagram will now need parental permission before they can go Live, as Meta strengthens its safety protections for young users across its platforms. The change is part of a broader rollout of Meta’s Teen Accounts, which are also coming to Facebook and Messenger.
Alongside the Live restriction for users under 16, Meta will require parental consent before teens can turn off the filter that blurs images containing suspected nudity in direct messages. The updates build on a set of safety features launched last year to protect younger users online.
Parents can manage these settings through Meta’s Family Center, which offers supervision tools for Teen Accounts across Instagram, Facebook, and Messenger.
“Teen Accounts have emerged as our flagship offering for young users,” stated Tara Hopkins, Instagram’s global director of public policy. “Every initiative developed by our youth teams aligns with our Best Interests of the Child Framework. We are integrating all youth safety settings under Teen Accounts, particularly those that require parental supervision.”
Since the initial rollout, more than 54 million teens have been moved into Teen Accounts. Meta reports that 97% of users under 16 have kept the default safety settings. Teens aged 13 to 15 face the strictest rules, including mandatory parental approval for any changes to account settings, while users 16 and older have more freedom to adjust their preferences.
Meta first introduced Teen Accounts on Instagram in September as part of a broader effort to centralize its teen safety features. The accounts are private by default, restrict messaging, and include screen-time management tools, and advertising targeted at teens is limited. New teen users are placed into Teen Accounts automatically, while existing users are being transitioned gradually.
Identifying and converting every eligible account has proved challenging, however. Meta is building AI-driven tools to detect underage users who may have slipped past the system or entered false birthdates. The effort falls under what Hopkins calls a “precautionary principle” intended to ease the burden on parents who have had to closely monitor their children’s online activity.
The push for stronger youth protections comes amid ongoing scrutiny of Meta’s handling of harmful and explicit content. The company has been criticized for exposing children to inappropriate material and faces pressure to improve its moderation practices.
Even as it bolsters teen safety, Meta has scaled back its broader content moderation. In recent months, the company has ended its third-party fact-checking program, cut diversity, equity, and inclusion (DEI) initiatives, and loosened its policies on hateful conduct.
As Meta walks the line between user safety and open expression, its Teen Accounts initiative stands as a clear signal of its commitment to protecting younger users, even as broader content oversight is pared back.