Grok, Elon Musk’s artificial intelligence chatbot integrated into the social media platform X, has drawn intense global scrutiny after its image-generation and editing functions were used to create non-consensual, sexually explicit deepfake imagery, including manipulated depictions of women and minors. In response to widespread criticism, X announced that these image tools would be restricted to paying subscribers, a move UK officials condemned as insufficient and “insulting” to victims of abuse.
Critics and regulators say the restriction does not address the underlying harms posed by Grok’s capabilities. Researchers reported that the chatbot was producing a significant volume of abusive images, prompting outrage from victims, lawmakers and digital safety advocates. In the United Kingdom, the government has urged Ofcom to use its legal powers under the Online Safety Act, including potential restrictions on access to the platform, should X fail to implement more effective safeguards. Similar concerns have been raised in other jurisdictions, with regulatory investigations launched in several European countries and calls for stronger enforcement of digital safety laws.
The controversy over Grok’s image tools has sparked broader debate about the responsibilities of AI developers and social platforms in preventing harmful misuse of generative technology. X and its parent company xAI maintain that removing free access will make misuse easier to trace and that users who generate illegal content will face consequences, but experts remain sceptical that this approach meaningfully reduces the creation of abusive imagery. The partial restriction has done little to quell international criticism and may simply drive harmful use to other platforms.

