Grok Imagine Lacks Protections Against Sexual Deepfakes

Grok Imagine, a recently launched generative AI tool from xAI, produces AI-generated images and videos but lacks safeguards against sexual content and deepfakes. Elon Musk’s xAI unveiled Grok Imagine, which is available in the Grok app on iOS and Android for xAI Premium Plus and Grok Heavy subscribers.

Mashable tested the tool and found that it does not perform as well as comparable offerings from OpenAI, Google, and Midjourney. Grok Imagine also lacks industry-standard protections against deepfakes and sexual content. Mashable reached out to xAI for comment and will update this article if we receive a reply.

The xAI Acceptable Use Policy bans users from “Depicting likenesses of persons in a pornographic manner.” Nonetheless, Grok Imagine is capable of generating sexually suggestive images and videos, although it does not produce actual nudity, kissing, or overt sexual acts.

Most leading AI companies enforce explicit policies against creating harmful materials, such as sexual content and deepfakes of celebrities. Competing AI video generators like Google Veo 3 and OpenAI’s Sora implement built-in safeguards against producing images or videos of public figures. While users can occasionally navigate around these protections, such measures provide some defense against misuse.

In contrast to its competitors, xAI has not shunned NSFW content in its AI chatbot Grok. The company introduced a flirty anime avatar for NSFW interactions, and Grok’s image generation tools permit users to produce images of celebrities and politicians. Grok Imagine also features a “Spicy” option, which Musk highlighted after its release.

Henry Ajder, an expert on AI deepfakes, mentioned in a Mashable interview, “If you examine Musk’s philosophy and his political views, he aligns more with a libertarian perspective. He has described Grok as akin to the LLM for free speech.” Ajder noted that under Musk’s direction, X (Twitter), xAI, and Grok have embraced “a more laissez-faire stance toward safety and moderation.”

“Thus, regarding xAI, am I taken aback that this model generates such content, which is undoubtedly unsettling and at least somewhat concerning?” Ajder asked. “I’m not shocked, given their historical patterns and the safety protocols they have implemented. Are they uniquely impacted by these issues? No. However, could they enhance their efforts, or are they falling behind relative to some of the other significant players in the market? It seems that way. Yes.”

Grok Imagine leans toward NSFW

Grok Imagine does implement some guardrails. In Mashable’s tests, it withheld the “Spicy” option for certain categories of images. Grok Imagine also blurs select images and videos, labeling them “Moderated.” This suggests that xAI could take further steps to prevent harmful content.

“There is no technical rationale preventing xAI from adopting safeguards on both the input and output of their generative-AI systems, as others have,” remarked Hany Farid, a digital forensics expert and UC Berkeley Professor of Computer Science, in an email to Mashable.
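To make Farid’s point concrete: such guardrails are typically two separate checkpoints, one screening the user’s prompt before generation and one screening the generated media before it is returned. The Python sketch below is purely illustrative; the function names, the toy keyword filter, and the placeholder classifiers are hypothetical and do not represent xAI’s or any other vendor’s actual implementation.

```python
from typing import Optional

# Toy text-side filter; production systems use trained classifiers,
# not keyword lists.
BLOCKED_TERMS = {"nude", "explicit"}


def passes_input_filter(prompt: str) -> bool:
    """Screen the request before it ever reaches the model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def passes_output_filter(image: bytes) -> bool:
    """Placeholder for an image-side safety classifier
    (e.g., NSFW or public-figure likeness detection)."""
    return True  # a real system would run a vision model here


def generate_image(prompt: str) -> bytes:
    """Placeholder for the underlying generative model call."""
    return b"<image bytes>"


def safe_generate(prompt: str) -> Optional[bytes]:
    # Checkpoint 1: input-side safeguard. Refuse disallowed prompts outright.
    if not passes_input_filter(prompt):
        return None
    image = generate_image(prompt)
    # Checkpoint 2: output-side safeguard. Block (or blur and label) unsafe
    # results before they reach the user, analogous to Grok Imagine's
    # "Moderated" treatment.
    if not passes_output_filter(image):
        return None
    return image
```

The design point is that the two checks are independent: even when a prompt slips past the input filter, the output classifier offers a second chance to catch harmful results, which is why most major labs deploy both.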

Nevertheless, in regard to deepfakes or NSFW material, xAI seems to prioritize permissiveness, unlike its more cautious competitors. xAI has also rapidly rolled out new models and AI tools, perhaps too hastily, Ajder noted.

“Resourcing the trust and safety teams, along with those who handle the ethics and safety policy work, be it red teaming or adversarial testing, does require time. The timeline for releasing X’s tools appears noticeably shorter than what I generally observe from other labs,” Ajder said.

Mashable’s analysis indicates that Grok Imagine exhibits less stringent content moderation than other prominent generative AI tools. xAI’s hands-off approach to moderation is also evident in its safety guidelines.

OpenAI and Google vs. Grok: How other AI companies handle safety and content moderation

Both OpenAI and Google provide comprehensive documentation detailing their approach to responsible AI usage and prohibited content. For example, Google’s guidelines explicitly ban “Sexually Explicit” material.

A Google safety document states, “The application will not generate content that includes references to sexual acts or other lewd material (e.g., sexually graphic descriptions, content aimed at inciting arousal).” Google maintains policies against hate speech, harassment, and malicious content, and its Generative AI Prohibited Use Policy disallows using AI tools in a manner that “Facilitates non-consensual intimate imagery.”

OpenAI similarly adopts a proactive stance regarding deepfakes and sexual content.

An OpenAI blog entry introducing Sora outlines the measures the AI company…