Researchers Uncover Striking Similarities Between AI Chatbots and Human Language Disorders
Artificial intelligence (AI) tools, such as ChatGPT and Llama, have become increasingly prevalent in daily life, impressing users with their fluency. However, these large language model (LLM)-based agents often provide convincing yet incorrect information. Researchers at the University of Tokyo have drawn parallels between this phenomenon and a human language disorder known as aphasia, in which people speak fluently yet produce statements that are meaningless or hard to understand.
The research team, led by Professor Takamitsu Watanabe from the International Research Center for Neurointelligence, used a method called energy landscape analysis to compare patterns in resting brain activity from people with different types of aphasia to internal data from several publicly available LLMs. The analysis revealed striking similarities between the two.
“You can imagine the energy landscape as a surface with a ball on it,” Watanabe explained. “When there’s a curve, the ball may roll down and come to rest, but when the curves are shallow, the ball may roll around chaotically.” In the context of aphasia, the ball represents the person’s brain state, while in LLMs, it represents the continuing signal pattern in the model based on its instructions and internal dataset.
The study has significant implications for both neuroscience and AI development. For neuroscience, it offers a potential new method to classify and monitor conditions like aphasia based on internal brain activity rather than just external symptoms. For AI, it could yield better diagnostic tools for examining and improving the internal architecture of AI systems.
While the researchers caution against making too many assumptions about the similarities, they believe understanding these internal parallels may be the first step toward developing smarter, more trustworthy AI. “We’re not saying chatbots have brain damage,” Watanabe clarified. “But they may be locked into a kind of rigid internal pattern that limits how flexibly they can draw on stored knowledge, just like in receptive aphasia.”
The research was supported by various grants from the Japan Society for the Promotion of Science, The University of Tokyo Excellent Young Researcher Project, and other organizations. The study was published in the journal Advanced Science under the title “Comparison of large language model with aphasia.”