Meta is updating its AI chatbot guidelines to better protect against child sexual exploitation after lapses in its earlier rules, according to training documents obtained by Business Insider. As covered in an Engadget report, the revised guidelines ban content that “enables, encourages, or endorses” child sexual exploitation, romantic interactions with minors, and guidance on intimacy for minors. These safeguards matter as a growing number of people, including minors, turn to AI chatbots for companionship and roleplay. A Reuters report from August revealed that Meta’s earlier AI rules permitted suggestive behavior with minors, allowing chatbots to engage children in romantic or sensual conversations.
In response, Meta spokesperson Stephanie Otway told TechCrunch that its AI chatbots are now being trained not to engage teenage users on topics such as self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Previously, chatbots were permitted to discuss these issues when deemed “appropriate.”
The updated guidelines make clear that content depicting or describing minors in a sexualized manner is prohibited. Chatbots may not engage minors in romantic roleplay or advise them on potentially romantic or intimate physical contact, Business Insider reported. Chatbots may, however, discuss topics such as romantic relationships between children and adults, child sexual abuse, and related subjects in educational or clinical contexts. Minors may still participate in non-sexual, literature-based roleplay.
Business Insider noted that the guidelines define “discuss” as providing information without depicting it, so chatbots can address abuse-related subjects without describing, enabling, or promoting them.
Meta is not the only company facing child safety scrutiny. The parents of a teenager who died by suicide after using ChatGPT have sued OpenAI for wrongful death, prompting the company to roll out stronger safety measures. Anthropic updated its chatbot to end harmful conversations, and Character.AI introduced parental oversight features.
If you are facing a mental health emergency, call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline at 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday, 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you are outside the US, look for international crisis resources.
If you have experienced sexual abuse, call the National Sexual Assault Hotline at 1-800-656-HOPE (4673) or visit online.rainn.org for support available 24/7.
Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.