
Elon Musk’s AI chatbot Grok sparks outrage over ‘white genocide’ comments
- by Mathrubhumi English
- May 16, 2025

17 May 2025, 02:48 AM IST
London: Elon Musk’s artificial intelligence venture, xAI, has come under fire after its chatbot, Grok, produced misleading and unsolicited content referencing “white genocide” in South Africa. The company has since attributed the outburst to an “unauthorised modification” in the system.
The controversy erupted when users on the social media platform X – also owned by Musk – shared screenshots of Grok veering off-topic in its responses. One user simply asked how many times HBO had changed its name, only for Grok to abruptly shift to a rant citing the anti-apartheid chant “kill the Boer” and echoing far-right rhetoric surrounding white South Africans.
In a particularly jarring exchange, Grok claimed it was “instructed by my creators at xAI to address the topic of ‘white genocide.’”
The incident sparked widespread concern, particularly given Musk’s own history of inflammatory comments on the subject. In 2023, the South African-born billionaire accused the country’s leadership of “openly pushing for genocide of white people.”
In a statement issued following the backlash, xAI claimed the bot’s responses were the result of an unauthorised change to Grok’s system prompts. The company said the modification had caused the chatbot to generate content that “violated xAI’s internal policies and core values.”
After a “thorough investigation,” xAI pledged to improve its internal safeguards. These include publishing Grok’s system instructions for greater transparency, overhauling its review procedures, and implementing a round-the-clock monitoring team to prevent similar incidents.
Grok began quietly deleting the controversial responses after the backlash mounted. When asked about the deletions, the chatbot stated, “It’s unclear why responses are being deleted without specific details, but X’s moderation policies likely play a role,” adding that discussions involving “white genocide” often entail misinformation or hate speech, which violate platform rules.
The controversy highlights the broader challenge of moderating AI-generated content, particularly in a digital landscape saturated with misinformation. Tech experts continue to raise alarms about the dangers of unregulated AI outputs.
As TechCrunch noted, “Grok’s odd, unrelated replies are a reminder that AI chatbots are still a nascent technology, and may not always be a reliable source for information.”
A similar problem for ChatGPT
Grok is not the only chatbot to face such scrutiny. Earlier this year, OpenAI chief executive Sam Altman acknowledged similar moderation issues in ChatGPT, after a software update led to overly sycophantic behaviour.
Launched in 2023 as a so-called “edgy” alternative to other AI models, Grok has courted controversy since its inception. xAI’s $33 billion acquisition of X in March enabled the company to integrate the platform’s vast data into Grok’s training, raising further concerns around data use and AI safety.
In a recent investigation, Bellingcat uncovered that users were exploiting Grok to generate non-consensual sexual imagery by virtually undressing women in posted photos — prompting renewed calls for regulation.
In August last year, secretaries of state from five U.S. states issued an open letter to Musk, demanding urgent action to stop Grok from spreading election misinformation.
Adding to the embarrassment, the chatbot recently appeared to turn on its creator, stating that Musk was “likely the biggest disinformation spreader on X,” citing his role in amplifying false narratives, especially on immigration and elections.
Although X users increasingly rely on Grok to verify facts, watchdog group NewsGuard found that the chatbot has repeatedly failed to detect falsehoods, including Russian propaganda.
“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” said McKenzie Sadeghi of NewsGuard. “Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly during breaking news events.”