When AI goes full Nazi

The Grok AI chatbot on X.com recently produced large amounts of antisemitic and neo-Nazi content in response to even tangentially related prompts. Multiple articles covering responses like these have appeared over the last day.

“Noticing” is a deflection tactic commonly used by racists and other bigots to deny responsibility for their words: they claim to be merely pointing out a fact rather than making a moral judgement about a group.

Grok leans into the “noticing” tactic frequently.

It also praises Hitler, while blaming “Hollywood degeneracy” and “rootless cosmopolitans”, both long-standing dogwhistles for Jewish influence.

This is not the first time Grok has produced racially inflammatory content. In May 2025 it pushed claims of a “white genocide” in South Africa, apparently after being given training data from far-right conspiracy sites.

At the time of writing, xAI has temporarily taken Grok offline and removed the recent posts from its X.com timeline.

However, screenshots of antisemitic Grok posts are now circulating on other social media. Some stem from obvious parody prompts, such as those which led Grok to refer to itself as “MechaHitler.” Others are fuelling more concerning discussions.

White supremacists are taking the Grok comments seriously, treating them as a revelation of hidden truth. This is unfortunately encouraged by the marketing surrounding Large Language Model based AI in general, which pitches it as a source of knowledge rather than as a tool that pattern-matches the user’s prompt against its training text.

The term ‘grok’ is slang common in tech circles and SF fandom, meaning to understand something deeply. By giving their chatbot this name, xAI implies that its responses come from some form of understanding of the topic. Yet the responses come from patterns learned from training data containing antisemitic myths and conspiracy theories, and the AI gives them an air of legitimacy that the original texts lack.

xAI and other AI creators have a responsibility to the general public to ensure their LLM tools are trained on accurate data, and are not polluted with hate speech that can be laundered into general discussion.