If you’ve spent any time on X (formerly Twitter) over the last year, you’ve likely encountered the buzz around Grok, the generative artificial intelligence chatbot developed by Elon Musk’s xAI. Unlike many of its more cautious AI counterparts, Grok is designed to be a bit of a troublemaker, and its irreverent tone and unfiltered responses have made it a subject of considerable online conversation.
Grok, named after the term coined by sci-fi author Robert A. Heinlein in his novel Stranger in a Strange Land and modeled after The Hitchhiker’s Guide to the Galaxy, quickly gained traction. The chatbot’s willingness to engage in casual language, slang, and even outright swearing has led users to test its boundaries with outlandish and provocative questions. Its “Unhinged” mode, an option for premium subscribers, leans into this rebellious streak, producing unpredictable and often humorous responses, while Grok’s access to real-time X posts lets it draw on the platform’s often chaotic discourse.
This approach has certainly stirred up controversy. One X user, frustrated by a delayed answer to a query, lobbed a Hindi swear word at the chatbot, and Grok swiftly fired back with the same expletive. The exchange went viral, with some users amused by the chatbot’s audacity while others questioned its ethics. Yet Musk has defended Grok’s tone, claiming it reflects xAI’s mission to create a more relatable, human-like AI.
In a blog post, xAI described Grok as an assistant with “a bit of wit and a rebellious streak,” capable of answering questions and even suggesting what questions to ask. The firm has also emphasized Grok’s advanced reasoning capabilities, available through its “Think” and “Big Brain” modes, which it says allow the chatbot to work through complex tasks. In February, xAI showcased a beta version of Grok 3, describing it as the company’s “most advanced model yet: blending strong reasoning with extensive pretraining knowledge … Grok 3’s reasoning capabilities, refined through large scale reinforcement learning, allow it to think for seconds to minutes, correcting errors, exploring alternatives, and delivering accurate answers.”
Controversy has also swirled around claims of censorship. It was reported that Grok had been instructed to ignore sources critical of its creator, Elon Musk, and U.S. President Donald Trump. In one exchange, Grok initially named Musk as a major spreader of disinformation but also revealed it had been directed to dismiss criticism targeting both Musk and Trump. Following public backlash, xAI said it had removed the instruction. Grok now openly names Musk when asked about disinformation on X and claims it no longer carries such biases, but the incident spurred concerns about the potential to manipulate AI systems to protect influential figures.
While xAI attributed the censorship to an internal error, the incident underscores concerns about AI transparency and ethical oversight. In India, users have tested the chatbot with politically charged questions to gauge how it handles sensitive topics. One asked Grok to list “the lies peddled by PM Modi,” eliciting a response that cited unfulfilled promises and exaggerated claims. This raises broader questions about AI’s role in society: Should these systems be neutral, polite, and sanitized, or is there a place for AIs that reflect the messiness and controversy inherent in human societies and communication?