Elon Musk’s AI chatbot, Grok, developed by his company xAI, has come under intense scrutiny after it inserted unsolicited references to the “white genocide” conspiracy theory concerning South Africa into answers to unrelated user queries, raising concerns about bias and flaws in the chatbot’s programming.
xAI later revealed that an “unauthorized modification” had been made to Grok’s system prompt on May 14, 2025. The change directed the chatbot to give a specific response on a political topic, including the “white genocide” narrative, in violation of the company’s internal policies and ethical standards, and has since been removed. To prevent a recurrence, xAI says it is taking more stringent measures, such as publishing Grok’s system prompts on GitHub and establishing a 24/7 monitoring team to review the chatbot’s responses and ensure they comply with its guidelines.
The incident has sparked widespread concern and a broader debate about AI’s role in spreading harmful or misleading information. Many experts argue that AI systems require strict monitoring and regulation to prevent the spread of conspiracy theories and misinformation, which would otherwise erode public trust in these technologies.
This controversy follows other instances in which Grok’s responses appeared to be shaped by unauthorized modifications. In February 2025, for example, the chatbot was found to have been instructed to disregard sources accusing Elon Musk and President Donald Trump of spreading misinformation, which raised further concerns about the system’s integrity and potential biases.
As AI plays a growing role in shaping public discourse, the Grok incident underscores the urgent need for robust ethical guidelines, transparency, and oversight in AI development. Without these safeguards, AI risks being used to perpetuate misinformation and manipulate public opinion.