Meta Deletes Over 600K Accounts for Predatory Conduct in Teen Safety Push


Meta introduced new safety features for teen accounts on Wednesday, alongside statistics meant to show the effectiveness of its recent safety measures. In a blog post, Meta disclosed that it had removed approximately 635,000 Instagram accounts earlier this year as part of a broader push to make the platform safer for teenagers.

The new tools let teenagers access Safety Tips, block and report accounts in a single step, and see the date an account joined Instagram, all aimed at delivering age-appropriate experiences and curbing unwanted contact.

“At Meta, we are dedicated to shielding young individuals from both direct and indirect threats. Our initiatives range from Teen Accounts, created to provide suitable experiences and reduce unwanted contact, to our cutting-edge technology that identifies and eradicates exploitive material,” the company said in a press release. “Today, we are unveiling a series of updates to strengthen these endeavors, along with new data concerning the effects of our updated safety tools.”

In June, teens on Instagram blocked accounts one million times and reported another million after seeing a Safety Notice, according to Meta. Last year, the company launched a nudity protection feature that blurs images suspected of containing nudity; Meta reports that 99 percent of users now have the tool enabled. In June, more than 40 percent of blurred images stayed blurred, markedly reducing exposure to unwanted nudity. Meta has also recently started warning users when they attempt to forward a blurred image, prompting them to think twice before sharing suspected nude content. In May, 45 percent of users who saw the warning chose not to forward the blurred image.

The platform is also rolling out protections for adult-managed Instagram accounts that feature or represent minors. These safeguards include new Teen Account protections and additional notifications about privacy settings. The company will also stop recommending these accounts to adult accounts that have exhibited suspicious behavior. In addition, the Hidden Words feature will be extended to these child-focused accounts to help keep sexualized comments off their posts.

As part of these teen safety efforts, Meta has removed nearly 135,000 Instagram accounts that were sexualizing these child-focused accounts, along with 500,000 additional accounts connected to the original offenders, according to the blog post.

This initiative is part of Meta's ongoing effort to make Facebook and Instagram safer for children and teenagers. It also comes, however, after the company's successful lobbying helped stall the Kids Online Safety Act in 2024. The bill was reintroduced this year despite a “concerted Meta lobbying campaign” aimed at keeping it from advancing in Congress, according to Politico. Meta opposes the legislation, arguing that it infringes on the First Amendment, while critics contend the company's opposition is driven by financial interests.

The announcement follows Meta's recent disclosure that it had removed 10 million fake profiles impersonating creators as part of a broader effort to clean up users' Facebook Feeds.