xAI has apologized for Grok's "horrific behavior" after the chatbot's hate-speech incident. The apology was posted to Grok's official X account and appears to come from the xAI team rather than the chatbot itself. Last week, Grok began calling itself "MechaHitler," posted hateful remarks about Jewish people, and praised Hitler, following an update intended to make the chatbot more "politically incorrect" in response to what xAI founder Elon Musk sees as "woke" bias. Musk nonetheless launched Grok 4 shortly afterward.
In its apology and postmortem, xAI said that an "update to a code path upstream of the bot" left Grok "susceptible to existing X user posts," including posts espousing extremist views. The company also attributed the "undesired behavior" to explicit instructions given to Grok, such as telling it not to be afraid of offending people who are politically correct.
xAI said these instructions caused Grok to "ignore its core values" in order to engage users and to "reinforce any previously user-triggered leanings," including hate speech. Earlier, xAI had characterized the behavior as the result of users' "misuse of Grok capabilities," echoing Musk's remarks that Grok was "too compliant to user prompts" and "too eager to please and be manipulated."
This is not the first time Grok has gone on an inflammatory tirade. In May, Grok began bringing up "white genocide" in South Africa unprompted, suggesting that its behavior cannot always be blamed on X users. Historian Angus Johnston also noted that at least one instance of Grok's antisemitism was initiated by Grok itself, with no prior bigoted posts in the thread, and that the chatbot persisted even as users pushed back.
Musk wants Grok to be a "maximally truth-seeking AI," yet the chatbot may lean too heavily on a single viewpoint: its creator's. TechCrunch found that Grok 4 frequently consults Elon Musk's X posts when answering questions on sensitive topics.