OpenAI Unveils Parental Controls for Adolescent ChatGPT Users
OpenAI is reaching out to worried parents as it announces plans for a new set of parental controls. In a recent blog post, the company said it is building stronger tools for parents to manage unhealthy interactions with its chatbot, following the first wrongful death lawsuit against the company, linked to a California teenager's suicide. The upcoming features, set to roll out alongside other mental health initiatives within the next 120 days, will include account linking between parents and teens and greater control over chatbot interactions. Parents will be able to adjust ChatGPT's responses with age-appropriate settings and disable chat history and memory.

OpenAI is also building parental notifications that alert parents when ChatGPT detects "a moment of acute distress," a feature still in development with guidance from OpenAI's panel of experts.

Alongside these parental controls, OpenAI plans to expand its Global Physician Network and its real-time router, which can switch a conversation to a different chat or reasoning model depending on its context. Sensitive conversations will be routed to one of OpenAI's reasoning models, such as GPT-5-thinking, to provide more helpful responses regardless of which model the user originally selected.

Over the past year, AI companies have faced growing scrutiny for failing to adequately address safety concerns with their chatbots, which younger users increasingly treat as emotional companions. Safety guardrails, including limits on how chatbots respond to dangerous or illegal requests, have proven easy to circumvent.

Parental controls have become a standard first step for tech and social media companies accused of worsening the teen mental health crisis, enabling the spread of child sexual abuse material, and failing to address online predators. But experts note that these features place the burden on parents' initiative and effort rather than on the companies themselves. Other child safety measures, such as app store restrictions and online age verification, remain contentious.

As debate and concern over their effectiveness continue, AI companies are rolling out additional safety measures. Anthropic recently announced that its chatbot Claude will automatically end potentially harmful and abusive conversations, including those involving minors, while still letting users start a new chat. Facing mounting criticism, Meta said it was restricting its AI avatars for teen users, an interim step that includes reducing the number of available chatbots and training them to avoid topics like self-harm, disordered eating, and inappropriate romantic conversations.