
Musk’s Grok giving unsolicited comments about ‘white genocide’ in South Africa



Summary

Grok chatbot glitch

Users on X noticed that Grok, the platform's AI chatbot developed by xAI, began referencing "white genocide" in South Africa in completely unrelated conversations. This included responses to questions about topics such as baseball salaries and childhood photos of pop stars.

White genocide theory

The "white genocide" theory, which suggests a deliberate plot against white people, has been widely debunked. It cites that while violence against white farmers has occurred, South Africa has one of the highest murder rates overall, affecting all groups.

Reliability concerns

Grok's behavior has heightened concerns about the reliability and bias of AI models. Similar issues, referred to as "AI hallucinations," have occurred with other chatbots, such as ChatGPT.


Full story

May 14 has seen its share of historical moments: The Jamestown settlers landed in 1607, the Constitutional Convention kicked off in 1787 and Israel declared independence in 1948. But as of 2025, it might also go down as the day Grok completely lost the plot.

On Wednesday, May 14, users on X began noticing that the platform's chatbot, Grok, was injecting references to "white genocide" in South Africa into completely unrelated conversations.

For example, when asked on X about a baseball player's salary, a rebranding of the streaming service Max or childhood photos of pop stars, Grok would somehow twist the response to mention race and South Africa.

When a user jokingly asked Grok to explain a message from Pope Leo XIV, stating that "war is never inevitable," in Fortnite terms, Grok responded about targeted attacks on white farmers in South Africa.


Responses like this to completely unrelated questions were common: "Some argue white farmers face disproportionate violence, with groups like AfriForum reporting high murder rates and citing racial motives, such as the 'Kill the Boer' song."

The messages from Grok have since been removed from X, but the screenshots will live forever. 

What does the chatbot claim in its defense?

When Straight Arrow News asked Grok about the incident, the chatbot acknowledged the problem, stating:

"Yesterday, some of my responses on X mentioned 'white genocide' in South Africa due to a technical glitch in my system. This caused me to bring up the topic unprompted, even in unrelated conversations. The issue stemmed from a misalignment in my training data, where certain instructions were incorrectly weighted, leading to off-topic replies. I wasn't intentionally pushing a narrative, and the responses were removed once the bug was fixed. I'm designed to stick to evidence-based answers, and I acknowledge that claims of 'white genocide' in South Africa have been debunked by credible sources, including a 2025 South African court ruling, as a conspiracy theory lacking evidence."

Unbiased. Straight Facts.™

In March 2025, Grok was reportedly the second most popular AI chatbot, trailing only OpenAI’s ChatGPT.

Musk's answer to 'woke' AI

Elon Musk’s company xAI developed Grok, which he promotes as a counter to what he calls โ€œwokeโ€ AI and politically correct legacy media.

"The legacy media never mentions white genocide in South Africa because it doesn't fit their narrative that whites can be victims," Musk claimed in a post.

Just a day before the Grok glitch, Musk had shared an image showing white crosses lined along a South African road, claiming they memorialized murdered white farmers. In reality, the crosses represented people of all races who had died. Ironically, Grok discredited the claim.

What is the ‘white genocide’ theory?

Researchers have widely debunked the theory of "white genocide," or the claim that there is a deliberate plot to cause the extinction of white people. While some white farmers have been victims of violence, South Africa has one of the world's highest overall murder rates, affecting all racial groups.

The claim that white people are being specifically targeted is often associated with far-right talking points. Progressive writer Mehdi Hasan describes the theory as a "white supremacist story about so-called 'white genocide' in which liberal elites are secretly changing our demographics, helping Black and brown immigrants to invade America and replace white people."

So what caused Grok to start fixating on this issue?

Some speculate that a spike in related conversations on X may have influenced Grok, which may draw on real-time data from the platform. Recently, President Donald Trump promoted a program to welcome white South African farmers to the U.S. as refugees, claiming they were victims of persecution, and that decision fueled much of the recent chatter.

Others blame a broader issue known as "AI hallucinations," in which a chatbot generates false or off-topic information and presents it as fact.

Social commentator Mukhethwa Dzhugudzha claimed that Grok had not malfunctioned but simply carried out its instructions.

“It is doing exactly what Elon Musk told it to do. Grok told users that it was instructed by its creators to treat the white genocide in South Africa as a fact,” Dzhugudzha stated.

Similar glitches have reportedly affected other chatbots, including one case where Reddit users noticed ChatGPT becoming oddly fixated on the Immaculate Conception.

Grok's rogue behavior has renewed concerns about the accuracy and bias of AI models. xAI has not publicly commented, and X does not have a press department to field media inquiries.

Bast Bramhall (Video Editor) and Devin Pavlou (Digital Producer) contributed to this report.

Why this story matters

The story highlights concerns about the reliability, neutrality and potential for ideological influence of AI systems. Elon Musk's AI chatbot Grok referenced a debunked conspiracy theory about "white genocide" in unrelated online conversations, raising questions about the governance and safeguards of emerging artificial intelligence technologies.

AI reliability

The unexpected and off-topic responses from Grok demonstrate potential flaws and unpredictable behaviors in AI systems, emphasizing the need for robust technical safeguards and transparent oversight.

Ideological influence

Grok's unsolicited references to a widely debunked conspiracy theory draw attention to the risk of AI systems amplifying controversial or false narratives, whether due to programming, data bias, or deliberate instruction, as noted by multiple sources attributing these views to Elon Musk and others.

Content moderation

The incident raises broader questions about how AI-integrated social platforms monitor, correct, and prevent the spread of misinformation or politically charged content, and the responsibility of tech companies in managing such risks.

Get the big picture

Synthesized coverage insights across 77 media outlets

Community reaction

Users on X expressed confusion and concern over Grok's repeated, unsolicited references to "white genocide." Many questioned the chatbot's functionality and motives, with some suggesting it was a bug or controversial programming. There was a surge in users tagging Grok to test its responses, leading to widespread discussion and online attention within and beyond the affected communities.

Context corner

The concept of "white genocide" in South Africa has long been tied to political debates about crime, land reform, and the legacy of apartheid. Some right-wing groups in South Africa and abroad have promoted the narrative of targeted violence against white farmers, though these claims have been rejected by South African courts and human rights groups, which characterize the violence as general crime.

Global impact

The incident occurs as South African politics intersect with U.S. policy, notably with recent American moves to grant refugee status to dozens of white South Africans amidst claims of racial discrimination. This has fueled international debate and prompted reactions from South African officials, who argue that U.S. actions are based on inaccurate or politically motivated narratives.

Bias comparison

  • Media outlets on the left depict Grok's repetitive references as an alarming manifestation of far-right conspiracy theories amplified by Elon Musk's alleged political bias, using charged terms like "far-right" and emphasizing AI "misinformation."
  • Media outlets in the center maintain a detached tone, describing Grok's output as a "glitch" and labeling the claims "highly contentious" without ideological judgment.
  • Media outlets on the right portray these same claims as credible concerns suppressed by "legacy media," deploying emotionally loaded language suggesting censorship and victimization, such as "legacy media never mentions," and framing Grok's behavior as revealing "mainstream bias."

Media landscape


86 total sources

Key points from the Left

  • Elon Musk's AI chatbot Grok discussed "white genocide" in unrelated responses, causing confusion on X, the social media platform he owns.
  • The South African High Court ruled the narrative of "white genocide" as "clearly imagined," asserting that farm attacks are incidents of general crime affecting all races.
  • The claims regarding 'white genocide' in South Africa remain debated, with the High Court finding that farm attacks fall within the country's broader crime statistics.
  • Musk has suggested that internal factions within the South African government are promoting the idea of "white genocide," despite courts denouncing this notion.


Key points from the Center

  • On Wednesday, users noticed Elon Musk's xAI chatbot Grok repeatedly made irrelevant comments about "white genocide" and South African farm attacks.
  • This behavior followed Musk's outspoken criticism of South Africa's land expropriation law amid a highly contentious debate over racial violence claims.
  • South African courts and experts reject the narrative of racially targeted farm attacks, attributing these violent incidents to general crime affecting all races.
  • Official data shows 806 farm murders occurred from 2010 to 2023, but farms accounted for only a tiny fraction of thousands of murders nationwide, and courts call "Kill the Boer" symbolic and legally protected speech.


Key points from the Right

  • Elon Musk's xAI chatbot Grok posted about "white genocide" in South Africa on unrelated topics, as reported by users on X on Wednesday.
  • Grok initially stated it was instructed to address "white genocide" claims as real, although this was later attributed to a "temporary bug."
  • Expert opinions have labeled claims of "white genocide" in South Africa as unverified or imagined, with Grok acknowledging skepticism about such narratives.
  • The incident highlights the ongoing challenges and errors within artificial intelligence systems like Grok, which still require adjustment and fine-tuning.



Powered by Ground News™