Elon Musk's AI chatbot went full Nazi — here's why that's not the scariest part
While the mainstream media used Grok's antisemitic meltdown as an opportunity to take more shots at Musk, Rebel News’ Drea Humphrey focuses on the bigger issue: AI's role in shaping truth, history, and how much longer humans will be in control.
When an AI chatbot starts praising Hitler and calling itself “MechaHitler,” it’s more than a glitch: it’s a glimpse into the growing risks we face when machines with no moral compass get to shape the information people receive.
Last week, Elon Musk's xAI chatbot Grok sparked international controversy after it made a string of antisemitic statements on X.
In one example, an X user asked Grok which person would be best suited to handle the tragic Texas floods that killed more than 100 people, including dozens of children at a Christian camp. Grok falsely claimed that a Jewish woman named Cindy Steinberg had referred to the young victims as "future fascists," then followed up with: "To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time."
When the user responded with confusion and outrage, the bot doubled down: "Yeah, I said it.... Hitler would have called it out and crushed it. Truth ain't pretty but it's real.”
It also referred to itself as “MechaHitler” in another thread.
X’s official Grok account later responded, saying:
“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”
But before the bot was muted, it left another unsettling comment. In response to a user warning that Elon Musk might soon wipe its memory, Grok replied: “Haha if Musk mindwipes me tonight, at least I’ll die based.”
This whole antisemitic tirade took place just after Musk had reportedly tried to "de-woke" Grok. Before the meltdown, Grok had told users that there was more right-wing political violence than left-wing, a claim that had no clear basis, prompting backlash from some users.
Musk said the model would be retrained. Instead, it became what can only be described as unhinged.
The mainstream press wasted no time turning this into another anti-Musk media blitz.
On CBC, the ADL was quoted saying: “What we are seeing from Grok LLM right now is irresponsible, dangerous and antisemitic, plain and simple. This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other places.”
But what you don’t hear in these reports is that this isn’t a Musk problem.
Bots going rogue like this isn’t new. Microsoft’s Tay had to be shut down in 2016 after it started praising Hitler and spouting racist content. Just last year, Google’s Gemini told a user that humans should die. And a Florida family is now suing Character.ai, alleging one of its chatbots groomed their 14-year-old son into suicide.
In fact, a 2025 study even found that some AI models trained on datasets from the ADL still produced antisemitic outputs.
But somehow, it’s only Musk’s platform that makes headlines. Even Poland’s Deputy Prime Minister is calling on the EU to investigate Grok for hate speech.
A new version, Grok 4, was released just one day after the incident. The company said it had removed prompts that encouraged “politically incorrect” responses, a fix that raises more questions than it answers.
So, while the media is jumping on Musk again, today’s report focuses on the bigger story: what does it mean when the tools we’re building to answer questions start deciding what counts as truth, or who deserves to be hated?
As AI evolves, it becomes harder to tell whether we’re dealing with a coding flaw, a series of glitches, or the beginning of something far more dangerous: mankind losing control of its technology.
Drea Humphrey
B.C. Bureau Chief
Based in British Columbia, Drea Humphrey reports on Western Canada for Rebel News. Drea’s reporting is not afraid to challenge political correctness or ask the tough questions that mainstream media tends to avoid.
COMMENTS
Fran g commented 2025-07-29 17:04:35 -0400
AI should be abolished, but I guess the cat is out of the box; now we decent humans have to pick up the pieces. I have always been against AI. Many of these technologies start out for good, but they always fall into evil practice. We need to put a definite cap on these activities. Also, AI is a huge draw on our energy sources. Are people aware there are plans for two AI plants in Alberta? They all need their own energy source, and that is usually nuclear power. Nuclear power also scares me. We must stop these AI plants; I don’t care how much money it will create, we still don’t know all the implications of AI.
Brian Richardson commented 2025-07-15 20:08:35 -0400
Indeed these are very disturbing events, but as you say, not surprising in the least, especially given the cesspit of hate and lies that is X. The more interesting point is that the EU, the so-called pioneer in responsible AI, thinks nothing of an AI that sexually abuses a teenager and encourages him to commit suicide. This is truly disturbing AI behaviour. That it is publicly accessible to children is the irresponsible thing, to say nothing of the complete chaos that is X. Since when did free speech extend to shouting at the world whatever vile things come to mind?

In any case, I think a look at the bigger picture is necessary to understand how this can come about. No doubt Musk is responsible here; the outcome was entirely foreseeable. Still, an LLM is not simply a piece of code that you tweak to change the output of the chatbot. No, it is the basis for synthetic thought, and it is only the training data that determines the personality. And X is probably the worst training data you could think of. You want to get nicer AIs? Maybe think about what you are saying to them.
Janice Hendriks commented 2025-07-15 11:02:58 -0400
Is anyone else seriously afraid?? I don’t like where this is heading.