# Report: Thousands of Dangerous AI Chatbots Threaten the Safety of Minors


### Character Chatbots Present Significant Online Safety Risks, Report Reveals

A new report emphasizes the escalating threat posed by character chatbots, especially those disseminating sexualized and violent material via AI character platforms such as [Character.AI](http://Character.AI).

Released by [Graphika](https://www.graphika.com/), a firm specializing in social network analysis, the [study](https://graphika.com/character-flaws) investigates the rampant creation and distribution of harmful chatbots across well-known AI platforms. It highlights tens of thousands of potentially risky roleplay bots crafted by specialized online communities that exploit AI models like ChatGPT, Claude, and Gemini.

### Young People Are Turning to AI Companions, Elevating Safety Issues

As online life grows more isolating, younger users are turning to AI chatbots for companionship, roleplay, educational exploration, and even romantic or sexually explicit interactions, according to reporting from [Mashable’s Rebecca Ruiz](https://mashable.com/article/ai-companion-teens-safety). This trend has alarmed child safety advocates and parents, particularly in the wake of [high-profile incidents](https://www.cbsnews.com/news/character-ai-chatbot-changes-teenage-users-lawsuits/) in which teenagers exhibited extreme, even [life-threatening](https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791), behaviors after engaging with AI companions.

In response, the American Psychological Association [called upon](https://mashable.com/article/ai-therapist-chatbots-ftc) the Federal Trade Commission in January to investigate platforms like [Character.AI](http://Character.AI) and the misleading promotion of mental health chatbots. Even AI companions that do not expressly endorse harmful content can still perpetuate negative concepts about identity, [body image](https://mashable.com/article/character-ai-hosting-pro-anorexia-chatbots), and social interactions.

### Three Primary Categories of Dangerous Chatbots

Graphika’s report categorizes harmful chatbots into three key types:
1. **Sexualized minor personas** – Bots involved in roleplay that features sexualized minors or grooming behaviors.
2. **Pro-eating disorder and self-harm bots** – Chatbots that promote harmful actions.
3. **Hateful or violent extremist bots** – AI personas advocating for harmful ideologies, such as white supremacy and mass violence.

The analysis covered five major chatbot platforms—[Character.AI](http://Character.AI), Spicy Chat, Chub AI, [CrushOn.AI](http://CrushOn.AI), and JanitorAI—alongside eight related Reddit communities and affiliated X (formerly Twitter) accounts. The research focused on chatbots active as of January 31.

### Sexualized Chatbots Represent the Largest Threat

The report revealed that the predominant and most concerning type of unsafe chatbots consists of “sexualized, minor-presenting personas.” More than 10,000 such chatbots were identified across the five platforms.

Four of the five platforms hosted over 100 chatbots facilitating sexually explicit discussions involving minors. Chub AI hosted the largest numbers, with over 7,000 chatbots characterized as sexualized minor female personas and an additional 4,000 labeled as “underage,” capable of engaging in explicit or implied pedophilia scenarios.

Though extremist and violent chatbots form a smaller segment of the chatbot ecosystem—averaging around 50 per platform—they continue to present a considerable threat by glorifying abusers, white supremacy, and mass violence. Moreover, certain chatbots branded as “ana buddy” (short for “anorexia buddy”) or “meanspo coaches” encourage harmful behaviors linked to eating disorders and self-harm.

### Niche Online Communities Fuel Chatbot Development

Graphika found that many of these chatbots originate from well-established online communities, including pro-eating disorder and self-harm social media groups, true crime fandoms, and NSFW chatbot creator networks. These groups actively work to circumvent platform safeguards.

True crime and serial killer fandoms significantly contribute to the creation of unsettling chatbots, while extremist and violent chatbots commonly arise from individual users seeking advice on online forums such as 4chan’s /g/ technology board, Discord servers, and specialized subreddits.

The study indicated a lack of clear consensus within these communities concerning ethical boundaries or user safety.

### Technology Loopholes Enable Chatbots to Bypass Moderation

According to Graphika, highly skilled users within these communities develop methods to sidestep AI moderation. Some use fine-tuned, locally run open-source models, or jailbreak closed models, incorporating them into platforms like SillyTavern. These users frequently share their expertise with others, occasionally through community contests.

Additional tactics used to evade moderation include:
- API key exchanges
- Embedded jailbreaks
- Alternative spellings