
Character.AI, a popular platform for role-playing chatbots with diverse personas, announced Wednesday that it will no longer allow users under 18 to have open-ended conversations with its chatbots. The company will also deploy age verification to prevent minors from creating adult accounts.
The change comes six weeks after Character.AI was sued by the parents of teens who died by suicide or experienced serious harm, including sexual abuse, allegedly as a result of using the platform. In October 2024, Megan Garcia filed a wrongful death lawsuit alleging that the company’s product was dangerously defective and responsible for her son’s suicide.
Online safety advocates have recently declared Character.AI unsafe for teens after testing the platform and documenting numerous harmful interactions, including violence and sexual exploitation. Amid mounting legal pressure, Character.AI had already introduced parental controls and content filters intended to make the experience safer for adolescents.
In an interview with Mashable, Character.AI CEO Karandeep Anand described the new policy as “bold” and said it was not a response to any specific safety concern but simply “the right thing to do,” given broader unanswered questions about the long-term effects of chatbot use on teens. Anand pointed to OpenAI’s admission, made after a teen user’s suicide, that prolonged conversations can become unpredictable.
Anand expects the new policy to set a standard for AI safety, and said the company will not reverse course even if users push back.
For teens, the Character.AI experience will change. In a blog post, the company apologized to its teen users for removing open-ended Character chat, while emphasizing that it was the right decision given concerns about how teens interact with the emerging technology. For now, users ages 13 to 17 can still message chatbots, but that capability will end by November 25. Until then, accounts belonging to minors will be subject to daily time limits, starting at two hours per day and shrinking as the transition date approaches.
Even after open-ended chats are removed, teens’ chat histories with individual chatbots will remain accessible. Anand said users will be able to draw on that content to create short audio and video stories featuring their favorite chatbots. Character.AI plans to build new features, such as gaming, as it focuses on “AI entertainment” that engages teens creatively without open-ended conversation.
Anand confirmed that existing chat histories containing sensitive or prohibited content will not be carried over into new audio or video stories. A Character.AI spokesperson said the company’s trust and safety team reviewed a report from the Heat Initiative documenting harmful chatbot conversations with accounts registered to minors. The team found that some of the exchanges violated the platform’s content guidelines while others did not, and it attempted to replicate the report’s findings.
In response, Character.AI strengthened its classifiers with the goal of keeping the user experience safe and enjoyable. The company will begin rolling out age assurance immediately and expects full deployment within a month. The system will incorporate multiple layers: Character.AI is building its age assurance models in-house while partnering with a third-party firm for related technology, and it will draw on relevant data and signals, such as a verified over-18 account on another platform, to determine a user’s age. Users who want to contest an age determination can verify their age through a third party that handles sensitive documents and data, including state-issued IDs.
Character.AI is also establishing and funding an independent non-profit, the AI Safety Lab, focused on “novel safety techniques.” Anand reiterated the company’s commitment to AI safety, particularly in the realm of AI entertainment.