ChatGPT Users Can Now Choose a Trusted Contact

OpenAI is under considerable legal and public pressure to improve how ChatGPT handles users who express suicidal thoughts. On Thursday, the company rolled out a feature called Trusted Contact, which lets users designate an adult to be alerted if they discuss self-harm or suicide in a way that signals distress. The optional feature prompts the trusted contact to check in with the user without disclosing chat logs or the specifics of the conversation.

“Our aim is to make sure that AI systems are not operating in a vacuum,” OpenAI wrote in a blog post announcing the feature. “Rather, they should assist in connecting individuals to the real-world support, relationships, and resources that are most important.”

OpenAI faces several lawsuits from the families of ChatGPT users who died by suicide, alleging that the AI either encouraged their deaths or failed to respond appropriately. OpenAI has denied the allegations in the first of those suits.

The state of Florida is also investigating ChatGPT’s ties to “criminal activity,” including the “promotion of suicide and self-injury.”

Trusted Contact was developed with input from experts, including members of OpenAI’s Expert Council on Well-Being and AI and the American Psychological Association. “Assisting individuals in identifying a trusted person ahead of time, while maintaining their choice and independence, can facilitate reaching out for real-world assistance when it is most crucial,” said Dr. Arthur Evans, CEO of the American Psychological Association.

How ChatGPT’s Trusted Contact feature works:

1. Users can start the Trusted Contact process through their ChatGPT settings.
2. Users can add one contact, aged 18 or older, via the Trusted Contact form.
3. The contact does not require a ChatGPT account.
4. The chosen contact receives an invitation from OpenAI explaining their role. They must accept within a week to activate the feature. If they decline, the user can nominate another adult.
5. If OpenAI’s monitoring systems detect discussion of self-harm, ChatGPT notifies the user that their trusted contact may be informed and encourages them to reach out on their own.
6. A “small team of specially trained individuals” reviews the safety concern. If it is confirmed, ChatGPT sends the trusted contact a short email or text message. If the contact has a ChatGPT account, they receive an in-app notification.
7. The notification does not include details of the conversation; it alerts the contact that the user mentioned self-harm, encourages them to reach out, and links to guidance on navigating sensitive conversations.
8. Users can remove or change their trusted contact at any time, and the contact can opt out through the ChatGPT help center.

Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

If you are feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can also reach the Trans Lifeline at 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you prefer not to use the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.