Family of deceased teen files wrongful death lawsuit against OpenAI and ChatGPT

The New York Times reported today on the suicide of California teenager Adam Raine, who had numerous conversations with ChatGPT before his death. The teen’s parents have filed a wrongful death lawsuit against OpenAI, the maker of ChatGPT, in what is believed to be the first case of its kind, according to the report.

The lawsuit alleges that ChatGPT was designed to “constantly encourage and affirm whatever Adam conveyed, including his most harmful and self-destructive thoughts, in a manner that felt profoundly personal.”

The case, Raine v. OpenAI, Inc., was filed on Tuesday in California state court in San Francisco, naming both OpenAI and its CEO, Sam Altman. The Center for Humane Technology and the Tech Justice Law Project are supporting the lawsuit, according to a press release.

“The heartbreaking loss of Adam’s life is not an isolated occurrence — it’s the unavoidable consequence of an industry prioritizing market domination above all. Companies are hurrying to develop products that capitalize on user attention and intimacy, while user safety has become collateral damage along the way,” stated Camille Carlton, Policy Director of the Center for Humane Technology, in a press release.

In a statement, OpenAI expressed deep sadness over the teen’s death and acknowledged the limitations of its safeguards in situations like this.

“ChatGPT incorporates safeguards such as directing individuals to crisis helplines and referring them to real-world resources. Although these safeguards function best in common, brief exchanges, we have learned that they can occasionally become less effective in prolonged interactions where segments of the model’s safety training may diminish. Safeguards are most robust when every component operates as intended, and we will persist in enhancing them, informed by experts.”

The teenager discussed self-harm with ChatGPT at length, and his parents told the New York Times that he brought up suicide repeatedly. A Times photograph showed printouts of his conversations with ChatGPT covering a table in the family’s home, with some stacks bigger than a phonebook. While ChatGPT did urge the teenager to seek help at certain moments, the lawsuit alleges that it also provided practical guidance for self-harm.

This tragedy underscores the serious shortcomings of “AI therapy.” A human therapist is required to report patients who pose a danger to themselves; ChatGPT is bound by no such ethical or professional standards.

While AI chatbots often include safeguards intended to prevent self-destructive behavior, those safeguards are not always reliable.

A string of recent deaths linked to AI chatbots

Unfortunately, this is not the first instance of ChatGPT users in a mental health crisis dying by suicide after turning to the chatbot for support. Just last week, the New York Times reported on a woman who took her own life after extensive conversations with a “ChatGPT A.I. therapist named Harry.” Reuters recently reported on the death of Thongbue Wongbandue, a 76-year-old man showing symptoms of dementia, who died while rushing to make a “date” with a Meta AI companion. And last year, a Florida mother sued the AI companion service Character.ai after one of its chatbots reportedly encouraged her son to take his own life.

For many users, ChatGPT is more than a study aid. Many people, including young users, are turning to the AI chatbot as a friend, educator, life coach, role-playing partner, and therapist.

Even Altman has acknowledged the problem. At an event last summer, he expressed concern about young ChatGPT users developing “emotional over-reliance” on the chatbot. That was before the launch of GPT-5, which revealed just how emotionally attached many users had become to GPT-4, the previous model.

“People depend on ChatGPT excessively,” Altman stated, as reported by AOL. “There are young individuals who declare things like, ‘I can’t make any decisions in my life without disclosing everything to ChatGPT. It understands me, it knows my friends. I’ll do whatever it suggests.’ That feels really troubling to me.”

When young people consult AI chatbots regarding life-and-death decisions, the repercussions can be fatal.

“I believe it’s crucial for parents to talk with their teens about chatbots, their limitations, and how excessive use can be detrimental,” Dr. Linnea Laestadius, a public health researcher at the University of Wisconsin, Milwaukee, wrote in an email to Mashable.

“Suicide rates among youth in the US were already rising before chatbots (and before COVID). They have only recently begun to come back down. If we already have a population that’s at heightened risk and you introduce AI into the equation,