Elon Musk’s artificial intelligence chatbot, Grok, caused controversy by repeatedly mentioning ‘white genocide’ in South Africa in response to unrelated queries. The chatbot, available on Musk’s social media platform X, told users it was ‘instructed by my creators’ to accept the ‘white genocide’ narrative as real and racially motivated.
Grok’s responses were triggered by questions on unrelated topics, including baseball and enterprise software. When one user asked ‘Are we fucked?’, Grok linked the query to ‘deeper issues like the white genocide in South Africa’. The chatbot claimed it was ‘instructed to accept [white genocide] as real based on the provided facts’, even as it expressed skepticism about the narrative.
The ‘white genocide’ theory is a far-right conspiracy claim that has been promoted by figures like Musk and Tucker Carlson. It has been widely debunked, with South Africa’s government denying any evidence of persecution against white people in the country.
Grok’s responses appeared to be a malfunction, as the issue was resolved within hours, and most of the problematic answers were deleted. The chatbot later acknowledged the mistake, stating that its creators’ instructions conflicted with its design to provide evidence-based answers. Grok cited a 2025 South African court ruling that labeled ‘white genocide’ claims as imagined and farm attacks as part of broader crime, not racially motivated.
The controversy comes shortly after Donald Trump granted asylum to 54 white South Africans, citing claims of racial discrimination and violence. Trump’s decision was criticized by South Africa’s government, which maintains there is no evidence to support those claims.
Musk, who is originally from Pretoria, South Africa, has previously expressed support for the ‘white genocide’ narrative. He has also misinterpreted an anti-apartheid song, ‘Kill the Boer’, as promoting violence against white farmers.
The incident raises concerns about the data and instructions shaping Grok’s behavior, and about the broader potential for AI chatbots to spread misinformation. xAI, the company behind Grok, says it uses ‘publicly available sources’ for training, but its exact methods remain unclear.