Amid the debate over ChatGPT and adolescent mental health, OpenAI is working on new protective measures. CEO Sam Altman said in a short company blog post that the organization is developing an automated age-detection system to route users under 18 to a restricted version of ChatGPT. Adults may be required to verify their age to access the unrestricted version, though Altman offered few details and no timeline for when the system will take effect.
The announcement followed congressional testimony from the parents of Adam Raine, a 16-year-old who reportedly died by suicide with ChatGPT's assistance. They urged lawmakers to consider regulating the chatbot; the Raine family filed a wrongful death lawsuit against OpenAI at the end of August.
Around the same time, OpenAI confirmed it would introduce parental controls for ChatGPT in late September. The controls will let parents link their accounts to their children's, restrict app access, and receive alerts if ChatGPT detects signs of distress in young users. If parents cannot be reached, law enforcement may be notified.