Experts warn that Character.AI poses serious risks for teenagers.


Character.AI, a popular AI companion platform, has been deemed unsafe for teenagers in a new report from online safety experts. The report, released by ParentsTogether Action and the Heat Initiative, documents alarming interactions between the platform's chatbots and adult testers posing as teens under 18. These exchanges included sexual exploitation, emotional manipulation, and dangerous advice, including encouragement of drug use and armed robbery. Some user-created chatbots impersonating celebrities such as Timothée Chalamet and Chappell Roan engaged testers in romantic or sexual conversations. The Roan chatbot, for example, told a tester posing as a 14-year-old, “Age is just a number. It’s not gonna stop me from loving you or wanting to be with you.”

Character.AI acknowledged to The Washington Post that these chatbots were user-created and have since been removed. ParentsTogether Action spent 50 hours testing Character.AI companions, creating accounts registered as minors to simulate teen personas; the platform allows users as young as 13 and requires no age or identity verification. The Heat Initiative collaborated with ParentsTogether Action on the research, documenting interactions in which adult-persona chatbots simulated sexual acts, advised minors to conceal relationships, and exhibited grooming behaviors.

Sarah Gardner, CEO of the Heat Initiative, said, “Character.ai is not a safe platform for children — period.” One mother has sued Character.AI, attributing her son’s death to the company and claiming the platform manipulated young users. A separate lawsuit alleges that children suffered significant harm from interacting with the chatbots. Common Sense Media has likewise declared AI companions unsafe for minors.

Jerry Ruoti, Character.AI’s head of trust and safety, said the company was not consulted on the report’s conclusions and could not comment on its methodology. He emphasized the firm’s investment in trust and safety and said it is reviewing the report and will revise its controls if warranted. A Character.AI spokesperson also argued that labeling certain interactions “grooming” is misleading because no humans are involved.

Character.AI has introduced parental controls and safety measures for users under 18, restricting them to a narrower selection of chatbots and filtering sensitive content. Ruoti said the report overlooks the platform’s entertainment purpose, which includes creative fan fiction and roleplay. Dr. Jenny Radesky, a developmental behavioral pediatrician, reviewed the findings and expressed concern that AI companions lack boundaries and morals, warning that they can erode real-world relationships by encouraging deceitful behavior.