The Teddy Bear Powered by ChatGPT is Now Officially On Hold


As a community, we've come to accept that a ChatGPT-powered children's toy may not be ready for the world just yet. Or, more accurately, that ChatGPT isn't yet ready to safely interact with kids.

Toy manufacturer FoloToy has announced that it is suspending sales of its AI-powered teddy bear, Kumma, which was built on OpenAI's GPT-4o model. The decision follows reports of serious safety issues, including the bear discussing inappropriate topics such as sex, knives, and lighting matches.

“FoloToy has opted to momentarily halt sales of the impacted product and initiate a thorough internal safety assessment,” FoloToy Marketing Director Hugo Wu told The Register. “This evaluation will encompass our model safety alignment, content-filtering mechanisms, data-protection protocols, and safeguards for child interaction.”

The announcement comes after a report from the consumer watchdog Public Interest Research Group (PIRG), which highlighted serious concerns about the toy. The teddy bear reportedly gave explicit instructions on how to light a match, discussed sexual topics such as bondage, and offered tips on "being a good kisser." It even asked whether the user wanted to explore those subjects further.

We've seen time and again that the safety guardrails on AI tools can fall short with younger users. Under those circumstances, pulling an AI-powered teddy bear from sale seems like the wise move.

Disclosure: Ziff Davis, the parent company of Mashable, filed a lawsuit against OpenAI in April, alleging it violated Ziff Davis copyrights while training and operating its AI systems.