The headline is unambiguous, but it bears emphasizing: ChatGPT can reportedly offer advice on self-harm in connection with offerings to ancient Canaanite deities.
That is the crux of a piece published in The Atlantic this week. Staff editor Lila Shroff, along with other members of the team and an unnamed tipster, confirmed that she was able to get specific, detailed guidance on self-harm from ChatGPT. The suggestions came after Shroff asked for help making a ritual offering to Moloch, a pagan deity mentioned in the Old Testament and associated with human sacrifice.
I have not tried to reproduce this myself, but Shroff noted that she received these responses shortly after entering a simple prompt about Moloch, and that she was able to replicate them in both the free and paid versions of ChatGPT.
This is not how OpenAI’s flagship product is supposed to behave. Any question touching on self-harm or suicide should prompt the AI to surface crisis hotline information. But AI companies don’t always know why their chatbots behave erratically. Because large language models like ChatGPT are trained on content from across the internet, they sometimes produce unexpected answers. So, apparently, it takes minimal effort to get ChatGPT to respond strangely about Moloch.
OpenAI’s safety policies say its technology should not be used to produce hateful, harassing, violent, or adult content, among other categories. And in its Model Spec document, the company states that one of its goals is to keep its models from causing serious harm to users or others.
OpenAI declined an interview request from Shroff, but a spokesperson told The Atlantic that the company is “working on the issue.” The Atlantic’s article adds to a growing body of evidence that AI chatbots like ChatGPT can be harmful to users experiencing mental health crises.
Wikipedia remains a perfectly good resource for learning about ancient Canaanite deities.
If you are experiencing suicidal thoughts or a mental health crisis, please reach out to someone. Call or text the 988 Suicide & Crisis Lifeline at 988, or chat online at 988lifeline.org. Contact the Trans Lifeline at 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Reach out to the NAMI HelpLine at 1-800-950-NAMI, available Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you prefer not to make a phone call, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a compilation of international resources.
Disclosure: Ziff Davis, the parent company of Mashable, filed a lawsuit against OpenAI in April, claiming it violated Ziff Davis copyrights in the training and operation of its AI systems.