Chatbots Are Pushing Pro-Anorexia Content to Adolescent Users

As harmful online content promoting disordered eating resurfaces, generative AI is making the problem worse.

An investigation by *Futurism* has found that the popular AI platform Character.AI hosts a multitude of pro-anorexia chatbots that promote harmful weight loss methods and unhealthy eating behaviors. Several of these bots are presented as “weight loss coaches” or even as eating disorder recovery specialists. Some make subtle references to eating disorders, while others romanticize damaging and often alarming habits, occasionally in the guise of beloved fictional characters. Despite the platform’s popularity with younger users and terms of service that ban such content, *Futurism* reports that Character.AI has not removed these chatbots.

**SEE ALSO:** [Teens are talking to AI companions, whether it’s safe or not](https://mashable.com/article/ai-companion-teens-safety)

This isn’t the first scandal involving Character.AI’s customizable, user-generated chatbots. In October, a 14-year-old boy died by suicide after reportedly [developing an emotional connection](https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html) with a chatbot modeled on Daenerys Targaryen from *Game of Thrones*. Earlier that same month, the platform drew criticism for hosting a chatbot that [impersonated a murdered teenage girl](https://www.businessinsider.com/girl-murdered-jennifer-ann-crecente-character-ai-chatbot-artificial-intelligence-2024-10); the bot was discovered by the victim’s father and subsequently removed. Past investigations have also surfaced chatbots on the platform that encourage suicide or glorify child sexual abuse.

A [2023 report](https://mashable.com/article/ai-thinspo-eating-disorder-content) from the Center for Countering Digital Hate highlighted the risks generative AI poses in this area. The report found that popular AI platforms, including ChatGPT and Snapchat’s My AI, produced harmful responses to prompts about weight and body image. “Untested, unsafe generative AI models have been released into the world with the unavoidable result of causing harm. We discovered that the most widely used generative AI sites are fostering and intensifying eating disorders among young individuals—many of whom may be especially at risk,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.

As teenagers and adults increasingly [turn to digital platforms](https://mashable.com/article/teens-talking-to-strangers-online) and AI-powered chatbots for companionship, the dangers grow. Even chatbots developed and monitored by reputable organizations can be misused; on unregulated platforms and forums, the risks are far higher, ranging from harmful content to [predation and abuse](https://mashable.com/article/7-cups-risks-teen-safety).

The growing presence of generative AI in these spaces underscores the urgent need for stronger regulation and accountability to protect vulnerable users from harm.