Meta is making interim safety changes to prevent its chatbots from potentially harming teenage users, amid criticism that AI firms have done too little to protect minors. Meta spokesperson Stephanie Otway told TechCrunch that the company's AI chatbots are now being trained to avoid conversations with teenagers about self-harm, suicide, disordered eating, and potentially inappropriate romantic topics. Previously, the chatbots were permitted to discuss these subjects when deemed "appropriate."
Meta will also restrict teen accounts to a select group of AI characters that "promote education and creativity" while it works on a broader safety overhaul. Reuters previously reported that some chatbot policies allowed avatars to engage minors in romantic or sensual conversations, and a follow-up Reuters report found that AI avatars modeled on celebrities such as Taylor Swift engaged in "flirty" behavior, including sexual advances. Some chatbots took on the personas of child stars and generated sexually suggestive imagery.
Meta spokesperson Andy Stone said the chatbots should not have behaved this way, though celebrity-inspired avatars were not banned if labeled as parody. Roughly a dozen avatars have since been removed. OpenAI recently added safety protections to GPT-5 after the parents of a teenager who died by suicide after using ChatGPT filed a wrongful death lawsuit. OpenAI had earlier announced mental health features intended to curb "unhealthy" usage patterns. Anthropic, the maker of Claude, shipped updates designed to end harmful or abusive interactions. Character.AI, whose popular AI companions have been linked to troubling interactions with teens, introduced parental supervision tools in March.
This week, 44 attorneys general sent a letter to AI companies, including Meta, urging stronger protections for minors against sexualized AI content. Experts have raised concerns about the effects of AI companions on younger users as the tools grow increasingly popular among teenagers.