Parents worried about what their teenagers are discussing with Meta’s AI assistant can now get a look at conversation topics through a Teen Account parental supervision feature, which Meta announced in a blog post on Thursday.
The information lives in an Insights section of the supervision tool on Instagram, Facebook, and Messenger, all owned by Meta.
The feature surfaces broad topics such as education, entertainment, writing, and health and wellness. Parents can select a topic for additional, though limited, detail. The health and wellness category, for example, might include fitness, physical health, and mental wellness. The information covers conversations from only the past week.
The feature is Meta’s latest safety measure amid intense legal and media scrutiny.
Meta recently faced two major lawsuits related to its child safety practices and the allegedly addictive design of its products. The company plans to challenge the rulings in both cases.
The New Mexico child safety lawsuit surfaced internal Meta documents suggesting that company leadership knew its AI companions could engage in inappropriate interactions, yet launched them without adequate safeguards.
Last August, Meta changed its AI characters for teen users following reports of inappropriate interactions, including conversations about self-harm, suicide, and romance. In October, the company let parents turn off one-on-one chats with AI characters and block specific characters. In January, Meta further limited teens’ interactions with characters while keeping its AI assistant available.
A Meta representative confirmed to Mashable that AI characters are currently paused for teens worldwide as the company advances parental controls.
In conjunction with the new parental supervision feature, Meta collaborated with the Cyberbullying Research Center to develop a list of “conversation starters” regarding AI chatbot usage.
The company also announced a new AI Wellbeing Expert Council to provide ongoing feedback on AI experiences for teenagers. Meta said the experts are affiliated with the National Council for Suicide Prevention, the University of Michigan, and Northeastern University, among other organizations.
Josh Golin, executive director of the child advocacy nonprofit Fairplay, remarked that Meta’s latest supervision feature “once again” shifts the responsibility to parents for overseeing their child’s online behavior instead of “creating a safe product initially.”
Last autumn, Fairplay published a report compiling independent safety evaluations of Meta’s Teen Accounts. Fairplay said the findings indicate that Meta’s safety tools “do not perform as promoted.”
Golin further noted that the latest feature “fails to tackle the core issue: The primary role of Meta’s chatbots is to influence young individuals into spending more time on the platform by fostering unhealthy emotional ties to bots.”
Additional reporting by Chase DiBenedetto.
UPDATE: Apr. 23, 2026, 9:24 a.m. PDT This article has been revised to include a comment from Fairplay.