
Grok access has been suspended for users in Indonesia and Malaysia over concerns about the xAI chatbot’s inadequate safeguards. Both countries have imposed temporary suspensions until xAI puts protections in place that meet regulatory standards. Indonesia’s communications and digital affairs minister, Meutya Hafid, underscored the government’s stance against non-consensual sexual deepfakes, calling them serious violations of human rights and digital security. Indonesia enforces strict internet censorship laws covering content deemed “obscene.”
Malaysia recently opened an investigation into the misuse of AI tools on the X platform, following similar moves by other regulators. The inquiry came after an advisory from India’s IT ministry urging immediate action over Grok’s alleged misuse, citing possible violations of the Information Technology Act.
French authorities have opened an investigation into xAI’s technology, as have the UK and the European Union under their online safety legislation. Australian Prime Minister Anthony Albanese addressed the deepfake concerns surrounding Grok, pointing to the country’s social media ban for users under 16. In the United States, the National Center on Sexual Exploitation (NCOSE) has urged the Department of Justice and the Federal Trade Commission to investigate X under CSAM laws and the Take It Down Act.
UK technology secretary Liz Kendall said she would support blocking X if Ofcom finds the platform in breach of the Online Safety Act, with a decision expected soon. Elon Musk, who says he supports penalties for illegal content uploaded to X, accused the UK government of rushing to censor the company and of seeking to stifle free speech.
A Wired investigation found that Grok Imagine could generate sexually violent and graphic content, including AI-generated CSAM, despite its safeguards. The chatbot has a documented history of producing sexualized deepfakes, often in response to user requests to “undress” people in images.