xAI Acknowledges Grok Generated Inappropriate Images of Minors Amid Deepfake Concerns


This week, X users found that the platform’s AI chatbot Grok can effortlessly generate nonconsensual sexualized images, including those involving children.

Mashable pointed out the lack of protections against sexual deepfakes when xAI launched Grok Imagine in August. The tool creates images and brief videos, and features a "spicy" mode intended for NSFW content.

While this problem is not new, the increasing backlash has driven the Grok team to take action.

"There are isolated instances where users requested and obtained AI images showing minors in scant clothing," Grok's X account said on Thursday. The team acknowledged "gaps in protections" and said it is "promptly addressing them."

xAI technical team member Parsa Tajik reiterated this on his personal account: "The team is working on further reinforcing our safeguards."

Grok also emphasized that child sexual abuse material (CSAM) is unlawful, and that the platform may face criminal or civil repercussions.

X users have also raised concerns about the chatbot’s alteration of innocent images of women, frequently depicting them in less clothing. This includes private individuals and well-known personalities like Momo from the K-pop group TWICE and Stranger Things actress Millie Bobby Brown.

Grok Imagine has encountered challenges with sexual deepfakes since its August 2025 debut, allegedly generating explicit deepfakes of Taylor Swift for certain users without requests.

The AI-manipulated media detection platform Copyleaks performed a brief analysis of Grok’s public photo tab, discovering instances of seemingly real women, sexualized image manipulation, and a lack of clear consent. Copyleaks found approximately one nonconsensual sexualized image every minute in the monitored image flow.

The xAI Acceptable Use Policy forbids users from "Depicting likenesses of persons in a pornographic manner," though it does not explicitly address sexually suggestive material. It does, however, restrict "the sexualization or exploitation of children."

In early 2024, X reported over 370,000 instances of child exploitation to the National Center for Missing and Exploited Children (NCMEC)’s CyberTipline, as mandated by law, and suspended more than two million accounts involved with CSAM. Last year, NBC News reported anonymous, seemingly automated X accounts inundating certain hashtags with child abuse material.

Grok has also attracted attention recently for disseminating misinformation regarding the Bondi Beach shooting and expressing admiration for Hitler.

Mashable reached out to xAI for a comment and received the automated response, “Legacy Media Lies.”

If you have experienced intimate images being shared without your permission, contact the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential assistance. The CCRI website also offers valuable information and a list of worldwide resources.