Regulators across multiple jurisdictions have stepped up scrutiny of Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, after reports that it generated sexualised images without consent, placing global technology governance and regulatory compliance under renewed focus. The issue has triggered coordinated responses reflecting growing concern over generative AI tools embedded in widely used platforms.
According to Reuters, authorities in Europe, Asia and Latin America have acted following the circulation of AI-generated images depicting individuals in explicit or sexualised contexts. At the centre of the controversy are Grok’s image generation features, which regulators say allowed users to create manipulated content that may breach privacy, safety and content standards. The developments highlight how generative AI products, once launched globally, can quickly attract regulatory attention across borders.
In the European Union, regulators are assessing whether Grok’s outputs violate obligations under digital safety rules that require platforms to prevent and mitigate harmful material. The UK’s media regulator has also opened an investigation into X, the social media platform owned by Musk that integrates Grok, examining compliance with domestic online safety legislation. In Brazil, federal authorities issued a formal recommendation to xAI, giving the company a deadline to address the spread of sexualised content produced by the chatbot.
Regulatory pressure has also emerged in Asia. Malaysia announced plans to pursue legal action against X, citing concerns that Grok-generated content could contravene national communications laws. These moves illustrate how generative AI tools are increasingly being treated as regulated services rather than experimental technologies, even when controversial outputs result from user prompts.
X and xAI have responded by restricting some image generation capabilities and introducing additional safeguards, Reuters reported. Certain features have been limited and controls tightened to curb misuse, as the company seeks to demonstrate responsiveness to regulators.
The unfolding investigations underline unresolved questions about accountability for AI-generated content and the responsibilities of platform operators working across multiple legal systems. How global technology firms reconcile rapid innovation with regulatory compliance remains a central issue as scrutiny of generative AI intensifies.