Grok blocks X users from generating images of real people in ‘revealing attire’

Elon Musk’s AI chatbot Grok has rolled out a new policy meant to strengthen safeguards against sexualized deepfakes on X. The change comes amid California’s investigation into the issue and an anticipated ban in the UK. “We have instituted technological safeguards to prevent the Grok account from enabling the alteration of images of real individuals in suggestive clothing like bikinis,” the X Safety account posted on X. The restriction applies to all users, including paid subscribers.

The X Safety update reiterated the company’s commitment to combating Child Sexual Abuse Material (CSAM) and non-consensual nudity, noting that image creation and editing through the Grok account on X is now limited to subscribers. The account also said X can geoblock users from generating images of real people in bikinis, underwear, and similar attire in regions where such content is prohibited.

Grok and X have come under fire after sexualized, non-consensual images of celebrities and minors, generated by Grok, circulated on X. California Attorney General Rob Bonta has urged xAI, the company that develops Grok, to remove and prevent such images, warning that his office would use “all tools at our disposal” to protect residents.

Meanwhile, Elon Musk, who heads xAI, X, and Grok, appeared to dare users to “break Grok image moderation” on the same day X Safety announced the updates.

British Prime Minister Keir Starmer has threatened action against Musk, saying, “if X cannot control Grok, we will,” according to a BBC report. Indonesia and Malaysia have already restricted access to Grok.

U.S. lawmakers have also taken aim at Grok and X, with three senators urging Apple to remove both apps from its App Store. As of Wednesday evening, the apps remained available.

X’s recent policy changes may signal an acknowledgment by company executives that the platform is not fully shielded by Section 230 of the U.S. Communications Decency Act, which protects tech companies from lawsuits over user-generated content. Content generated by an app’s own technology, however, may not enjoy the same legal protection.