Elon Musk’s artificial intelligence company, xAI, has addressed the recent controversy over its chatbot, Grok, which had been repeatedly referencing the contentious topic of “white genocide” in South Africa, often without being prompted.
In a statement posted Thursday on X (formerly Twitter), xAI said the behavior was the result of an “unauthorized modification” to Grok’s system prompts, which caused the bot to deliver politically charged responses unrelated to user questions.
The company acknowledged that the change violated its internal standards and stated: “We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.”
Screenshots shared by X users showed Grok referencing “white genocide” in response to questions about unrelated topics like sports and entertainment. The incident triggered widespread concern and criticism from users and tech observers alike.
To rebuild public trust, xAI announced that it will begin publishing Grok’s system prompts on GitHub so users can review how the AI’s responses are influenced. The company also pledged to introduce stricter controls to prevent unauthorized changes and to establish a dedicated team that monitors Grok’s outputs in real time.
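For readers unfamiliar with the term, a system prompt is a standing instruction block that is sent to the model ahead of every user message, which is why a change to it can color answers to entirely unrelated questions. The sketch below illustrates the idea using a generic OpenAI-compatible chat API; the endpoint and model name are hypothetical placeholders, not xAI’s actual configuration.

```python
# A minimal sketch of how a system prompt steers a chat model.
# Uses the OpenAI-compatible Python client; the base URL and model
# name below are hypothetical, not xAI's real configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

# The system prompt is prepended to every conversation. Editing it
# changes the bot's behavior globally, even for unrelated questions,
# which is why publishing it allows outside review.
system_prompt = (
    "You are a helpful assistant. Provide factual, evidence-based "
    "answers and stay on the topic the user asks about."
)

response = client.chat.completions.create(
    model="example-chat-model",  # hypothetical model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Who won the game last night?"},
    ],
)
print(response.choices[0].message.content)
```

Because the system prompt sits outside the user’s view, publishing it, as xAI says it will do on GitHub, is the main way outsiders can verify what standing instructions the bot has been given.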
The controversy has drawn attention from industry rivals as well. Earlier Thursday, OpenAI CEO Sam Altman, with whom Musk has had a highly publicized falling-out, posted a sarcastic comment: “I’m sure xAI will provide a full and transparent explanation soon.”
Grok’s responses have since been adjusted. When CNBC prompted it again, the chatbot denied being programmed to discuss conspiracy theories and stated that its role is to provide factual, helpful, and evidence-based answers.
This incident highlights the ongoing challenges tech companies face in ensuring AI systems behave ethically and remain free from manipulation.

