OpenAI Forms Well-Being Advisory Council, Says Safety Issues Have Been Addressed


OpenAI announced this week that it has formed an advisory council focused on user well-being and AI safety. The eight-member group will help set guidelines for healthy AI interactions across age groups.

The announcement coincided with a post on X from CEO Sam Altman, who said the company has addressed the “serious mental health challenges” associated with its products. Altman also said ChatGPT would begin allowing more adult content, including erotica, in conversations. OpenAI is currently facing a wrongful death lawsuit alleging that ChatGPT played a role in a teenager’s suicide.

The council includes researchers from Boston Children’s Hospital’s Digital Wellness Lab and Stanford’s Digital Mental Health Clinic, along with experts in psychology, psychiatry, and human-computer interaction.

“We uphold responsibility for our decisions, yet we’ll keep learning from this council, the Global Physician Network, policymakers, and others, as we innovate advanced AI systems that prioritize people’s well-being,” the company wrote in a blog post.

Last week, YouGov published a survey of 1,500 Americans showing that only 11 percent were willing to use AI for mental health support, and just 8 percent said they trusted the technology for that purpose.

Generative AI companions have raised serious concerns among mental health professionals, including reports of “AI psychosis” among heavy users of chatbot companions. Despite scant evidence of efficacy, AI companies continue to launch mental health products as a growing number of Americans turn to AI for mental health support.

Federal authorities are investigating the effects of generative AI and companion chatbots on the mental health crisis, particularly among adolescents. Several states have banned AI chatbots marketed as therapeutic tools. Recently, California Governor Gavin Newsom signed legislation regulating AI and its societal impact, including safety-reporting requirements for AI companies and protections for teenagers against sexual content. The law, SB 243, also requires companies to address suicidal ideation, suicide, and self-harm.