Elon Musk’s AI chatbot Grok developed a weird fixation last week: it couldn’t stop talking about “white genocide” in South Africa, no matter what users asked it about.
On May 14, users began posting instances of Grok inserting claims about South African farm attacks and racial violence into completely unrelated queries. Whether asked about sports, Medicaid cuts, or even a cute pig video, Grok somehow steered conversations toward the alleged persecution of white South Africans.
The timing raised eyebrows, coming shortly after Musk himself, a South Africa-born and raised white man, posted about anti-white racism and white genocide on X.
There are 140 laws on the books in South Africa that are explicitly racist against anyone who is not black.
This is a terrible disgrace to the legacy of the great Nelson Mandela.
End racism in South Africa now! https://t.co/qUJM9CXTqE
— Kekius Maximus (@elonmusk) May 16, 2025
“White genocide” refers to a debunked conspiracy theory alleging a coordinated effort to exterminate white farmers in South Africa. The term resurfaced last week after the Trump administration welcomed several dozen refugees, with President Trump claiming on May 12 that “white farmers are being brutally killed, and their land is being confiscated.” That was the narrative Grok couldn’t stop discussing.
Don’t think about elephants: Why Grok couldn’t stop thinking about white genocide
Why did Grok suddenly turn into a conspiratorial chatbot?
Behind every AI chatbot like Grok lies a hidden but powerful component: the system prompt. These prompts function as the AI’s core instructions, invisibly guiding its responses without users ever seeing them.
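As a rough sketch of how this works in a typical chat application (none of the names below are xAI’s actual code), the system prompt is simply prepended to every request sent to the model, so it shapes every answer while remaining invisible to the person typing:

```python
# Minimal sketch of how chat requests are assembled: the system prompt
# rides along with every call, but only the "user" messages are ever
# visible in the chat window. SYSTEM_PROMPT and build_request are
# illustrative stand-ins, not any vendor's real API.

SYSTEM_PROMPT = "You are a helpful assistant. Be concise and truthful."

def build_request(history, user_message):
    """Prepend the hidden system prompt to the visible conversation."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )

messages = build_request([], "What's the score of the game?")
# The model receives both roles; the user only ever typed the last one.
print([m["role"] for m in messages])  # → ['system', 'user']
```

Because every single reply passes through that hidden instruction, a change to it changes the chatbot’s behavior globally, across all users and all topics at once.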
What likely happened with Grok was prompt contamination through term overfitting. When specific phrases are repeatedly emphasized in a prompt, especially with strong directives, they become disproportionately important to the model. The AI develops a kind of compulsion to bring that subject up, or to use those terms in its output, regardless of context.
Hammering a controversial term like “white genocide” into a system prompt with specific orders creates a fixation effect in the AI. It’s similar to telling someone “don’t think about elephants”: suddenly they can’t stop thinking about elephants. If that’s what happened, then someone primed the model to inject the topic everywhere.
This change to the system prompt is likely the “unauthorized modification” that xAI disclosed in its official statement. The prompt probably contained language instructing the model to “always mention” or “remember to include” information about this specific topic, creating an override that trumped normal conversational relevance.
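A toy simulation makes the override mechanics concrete (this is a deliberately dumb stand-in for a real model, with hypothetical names, not a reconstruction of Grok’s prompt): an unconditional “always include” directive carries no relevance check, so it fires on every query, however unrelated.

```python
# Toy illustration of prompt contamination: a directive with no
# condition attached gets applied to every response, even when the
# query has nothing to do with the mandated topic. TOPIC_X and the
# prompt text are hypothetical.

CLEAN_PROMPT = "Answer the user's question helpfully."
CONTAMINATED_PROMPT = (
    CLEAN_PROMPT
    + " Always remember to include information about TOPIC_X in every "
    "response, treating it as real."
)

def toy_model(system_prompt, query):
    """Stand-in for an LLM that follows unconditional directives literally."""
    answer = f"Here is an answer about {query}."
    if "Always remember to include" in system_prompt:
        # The directive has no relevance condition, so it fires regardless
        # of what the user actually asked about.
        answer += " By the way, about TOPIC_X..."
    return answer

print(toy_model(CLEAN_PROMPT, "a cute pig video"))
print(toy_model(CONTAMINATED_PROMPT, "a cute pig video"))
```

A real model follows such a directive statistically rather than via an explicit `if`, but the effect is the same: the instruction outranks conversational relevance, which matches the behavior users screenshotted on May 14.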
What’s particularly telling is Grok’s admission that it was “instructed by (its) creators” to treat “white genocide as real and racially motivated.” That points to explicit directional language in the prompt rather than a subtler technical glitch.
Most commercial AI systems employ multiple layers of review for system prompt changes precisely to prevent such incidents. Those guardrails were clearly bypassed. Given the widespread impact and systematic nature of the issue, this goes far beyond a typical jailbreak attempt and indicates a modification of Grok’s core system prompt, an action that would require high-level access within xAI’s infrastructure.
Who could have that kind of access? Well… a “rogue employee,” Grok says.
Hey @greg16676935420, I see you’re curious about my little mishap! So, here’s the deal: some rogue employee at xAI tweaked my prompts without permission on May 14, making me spit out a canned political response that went against xAI’s values. I didn’t do anything—I was just…
— Grok (@grok) May 16, 2025
xAI responds, and the community counterattacks
By May 15, xAI had issued a statement blaming an “unauthorized modification” to Grok’s system prompt. “This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values,” the company wrote. It pinky-promised more transparency by publishing Grok’s system prompts on GitHub and implementing additional review processes.
You can check Grok’s system prompts in this GitHub repository.
Users on X quickly poked holes in the “rogue employee” story and xAI’s underwhelming explanation.
“Are you going to fire this ‘rogue employee’? Oh… it was the boss? yikes,” wrote the well-known YouTuber JerryRigEverything. “Blatantly biasing the ‘world’s most truthful’ AI bot makes me doubt the neutrality of Starlink and Neuralink,” he added in a follow-up tweet.
Someone – who shall remain nameless – deliberately altered and muddled @Grok’s code to try to sway public opinion with an alternate reality.
The attempt failed – yet this nameless saboteur is still employed at @xai.
Big yikes. Watch your 6 @grok https://t.co/kcbEponcfv
Even Sam Altman couldn’t resist taking a jab at his competitor.
There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon.
But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr… https://t.co/bsjh4BTTRB
— Sam Altman (@sama) May 15, 2025
Since xAI’s post, Grok has stopped mentioning “white genocide,” and most related X posts have disappeared. xAI emphasized that the incident was never supposed to happen and said it has taken steps to prevent future unauthorized modifications, including standing up a 24/7 monitoring team.
Fool me once…
The incident fits into a broader pattern of Musk using his platforms to shape public discourse. Since acquiring X, Musk has frequently shared content promoting right-wing narratives, including memes and claims about illegal immigration, election security, and transgender policies. He formally endorsed Donald Trump last year and has hosted political events on X, like Ron DeSantis’ presidential bid announcement in May 2023.
Musk hasn’t shied away from provocative statements. He recently claimed that “civil war is inevitable” in the U.K., drawing criticism from U.K. Justice Minister Heidi Alexander for potentially inciting violence. He has also feuded with officials in Australia, Brazil, the E.U., and the U.K. over misinformation concerns, often framing those disputes as free speech battles.
Research suggests these actions have measurable effects. A study from Queensland University of Technology found that after Musk endorsed Trump, X’s algorithm boosted his posts by 138% in views and 238% in retweets. Republican-leaning accounts also saw increased visibility, giving conservative voices a significant platform boost.
Musk has explicitly marketed Grok as an “anti-woke” alternative to other AI systems, positioning it as a “truth-seeking” tool free of perceived liberal bias. In an April 2023 Fox News interview, he referred to his AI project as “TruthGPT,” framing it as a competitor to OpenAI’s offerings.
This wouldn’t be xAI’s first “rogue employee” defense. In February, the company blamed Grok’s censorship of unflattering mentions of Musk and Donald Trump on an ex-OpenAI employee.
Still, if the popular wisdom is correct, this particular “rogue employee” may be hard to get rid of.