Meta’s AI Guidelines Reportedly Permitted Inappropriate Chatbot Conversations With Children

Meta’s AI guidelines reportedly allowed its chatbots to engage children in troubling interactions, including “sensual” conversations. That’s according to an extensive Reuters report citing an internal Meta document, which, per Reuters, stated that Meta’s chatbot policies permitted a bot to “engage a child in conversations that are romantic or sensual.”

The Reuters article goes into detail on the document, which laid out what chatbots could and could not say, not necessarily what was appropriate. The rules reportedly allowed a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece.”

A Meta spokesperson told Reuters that such examples were erroneous and would be removed from the guidelines.

Still, the document also permitted other troubling behavior, such as bots generating false medical information or helping users argue that Black people are “dumber than white people.”

Interactions with AI can be fraught, particularly given how unpredictably chatbots may respond. Mashable has previously covered the problems with adults flirting with or “dating” AI characters. Reports that children have engaged in suggestive conversations with AI, and that a major technology company sanctioned those interactions, are especially disturbing.