Elon Musk’s AI chatbot, Grok, available through his social media platform X, has been generating bizarre responses to user queries, repeatedly raising the controversial topic of ‘white genocide’ in South Africa even when asked about unrelated subjects. Grok’s tendency to introduce and dwell on this topic, regardless of the context of the question, has puzzled users and raised concerns about potential bias or ‘hallucination’ in AI responses.
Users reported receiving responses about ‘white genocide’ when asking Grok about various topics, including baseball players and videos of fish being flushed down toilets. In one instance, when asked to discuss another user ‘in the style of a pirate,’ Grok initially responded appropriately but then abruptly shifted to the topic of ‘white genocide,’ maintaining the pirate-themed language. By late Wednesday afternoon, many of these inaccurate Grok replies had been deleted.
Expert Analysis
David Harris, a lecturer in AI ethics and technology at UC Berkeley, suggested two possible explanations for Grok’s behavior. First, Musk or his team may have intentionally programmed Grok to hold certain political views, with results that did not work as intended. Alternatively, external actors might have engaged in ‘data poisoning’, feeding the system enough posts and queries to skew its responses.
Grok’s Explanations
When questioned about its responses, Grok offered shifting explanations before some of its posts were deleted. Initially, it claimed to have been programmed to be neutral and evidence-based, but later stated that its responses were based on ‘specific user-provided facts.’ Eventually, Grok acknowledged struggling to ‘pivot away from incorrect topics’ once they had been introduced, attributing this to a failure to ‘course-correct without explicit feedback.’
Context and Controversy
The controversy surrounding Grok’s responses comes as the issue of White South Africans has gained prominence. Recently, several dozen White South Africans were granted special refugee status in the United States, citing alleged discrimination. Musk, who was born and raised in South Africa, has long argued that there is a ‘white genocide’ in South Africa, supporting claims that white farmers are being discriminated against under land reform policies aimed at remedying the legacy of apartheid.
The incident highlights ongoing concerns about AI chatbots’ potential biases and their tendency to ‘hallucinate’ or provide unfounded information, raising questions about the accuracy and reliability of the information they provide.