Techno-optimists are heralding a future where artificial intelligence (AI) surpasses human capabilities, with some even claiming AI already outperforms humans in domains previously thought to be exclusively human, such as empathy and creativity.

Recent studies do report AI outperforming humans in areas such as empathy and creativity.
However, these claims are misleading, argues MJ Crockett, a cognitive scientist at Princeton University, because the competitions behind them are “rigged” to give AI an advantage. The contests often force humans to perform in machine-like ways, suppressing the natural human processes that foster empathy, creativity, and effective communication.
For instance, some studies suggest AI demonstrates more empathy than human doctors or therapists. In these experiments, researchers used AI programs such as OpenAI’s ChatGPT to generate written responses to Reddit posts about mental and physical health struggles, then compared the chatbot’s replies with those of human doctors and therapists who had responded to the same posts. In these comparisons, the chatbot’s responses were often rated as more empathetic than those of the human professionals.
Crockett argues that this comparison ignores the inherently human components of empathy. When facing a personal crisis, individuals typically seek support from those closest to them—people who are familiar with their life stories and personal relationships. A generic written response from a stranger, no matter how expertly crafted, cannot replace the comfort derived from those deeply rooted human connections.
Another study claimed that AI produces more innovative ideas than human experts. In this case, researchers pitted computer science PhD students and postdocs against a modified version of Anthropic’s Claude AI model. The challenge involved generating new research ideas, with each competitor assigned a topic and a template to standardize writing styles. A separate group of PhD students and postdocs, unaware of whether each idea came from an AI or a human, then evaluated the submissions. The AI-generated ideas were rated as more novel and often more exciting, but Crockett points out that this approach neglects the collaborative nature of scientific discovery. Scientific research typically thrives on teamwork and diverse perspectives, yet the humans in the test were forced to work alone, removing critical elements of human ingenuity.
Similar issues arose in a study claiming AI has a superior ability to resolve political disagreements. Researchers used AI to mediate discussions between groups of British citizens on contentious subjects such as Brexit and immigration. While the AI-generated statements designed to achieve consensus were rated as clearer and more informative than those created by human mediators, the experiment’s online chat interface removed crucial non-verbal cues, such as tone of voice, facial expressions, and body language. These cues are essential for gauging sentiment and building trust. So even if AI holds an advantage in online discourse that lacks them, its supposed superiority over human mediators in resolving political tension remains dubious, not least because the online platform is itself a major contributor to political division.
Crockett concludes that these so-called “AI victories” merely show that machines can perform certain human tasks in a machine-like manner. The tests instead highlight the unique qualities of human talent: building emotionally rich relationships, generating new knowledge through social bonds, and finding common ground through embodied communication. No chatbot can accomplish these things unless it is anchored in our social contexts, as humans are.
Techno-optimism, when it assumes our character is reducible to code, is closer to “human pessimism.” While acknowledging AI’s technical advances, it is critical not to mistake its narrow capabilities for the richer qualities found in human interaction.