AI Chatbots: The New Therapists?
The use of AI-powered chatbots in mental health is growing rapidly, sparking both excitement and concern. While these digital therapists offer potential solutions to accessibility problems in traditional mental healthcare, they also raise questions about the role of technology, and of the human element, in emotional support.

The cultural shift towards AI therapy highlights the challenges people face in accessing “traditional” mental health treatment.
The Appeal of AI Therapy
One of the primary drivers of this shift is the difficulty many people face in accessing traditional mental health services. The current mental healthcare system presents several hurdles: high costs, long wait times for appointments, and geographical limitations.
“The immediacy of AI chatbots makes them an attractive alternative to human-to-human therapy that is expensive and often inconvenient,” wrote Kevin Roose, a technology columnist at The New York Times and co-host of the Hard Fork podcast, in December.
For many, the anonymity and convenience of chatbots are significant draws. As the authors of a 2023 research paper noted, “Chatbots can provide a sense of anonymity and confidentiality, which can foster trust among individuals who may be hesitant to seek in-person help for their mental health concerns.”
A Brief History of Digital Therapists
The concept of AI-driven therapy isn’t new. One of the earliest examples was ELIZA, a text-based program developed at MIT in the mid-1960s. Even then, ELIZA’s creator, Joseph Weizenbaum, cautioned against attributing human understanding to machines.

ELIZA was one of the first chatbots; it is archaic by today’s standards.
Today’s chatbots, such as ChatGPT, Claude, and Gemini, are far more sophisticated than ELIZA. They are not sentient, but their design allows for a degree of conversational flexibility, and the technology continues to advance, though whether for better or worse is still open for discussion.
Advances in AI
One standout in the chatbot landscape is Claude, developed by Anthropic. Unlike some of its counterparts, Claude focuses on intuitive conversation. It has gained popularity as a virtual therapist and life coach, especially among tech professionals.
“Claude’s biggest fans … don’t believe that he – technically, it – is a real person,” Roose wrote. “They know that AI language models are prediction machines … And some people I’ve talked to are mildly embarrassed about the degree to which they’ve anthropomorphized Claude, or come to rely on its advice.”
Claude 3.5 Sonnet, like its predecessors, is designed with a particular character in mind, one that strives for open-mindedness and thoughtfulness.
Limitations and Concerns
Despite the potential benefits, there are valid concerns about using AI as a substitute for human therapists, chief among them the loss of genuine human connection.
“A human connection is really important for a lot of people, and properly trained mental health clinicians can establish a human connection and establish empathy, and they can also help with a line of questioning that can get at really what’s at the bottom of the concerns that a person has – rather than just running off a list of strategies that AI models tend to do,” said Jill Newby, a professor at UNSW Sydney.
Others worry that these technologies could blur the line between reality and fiction, especially for vulnerable groups such as young people or those struggling with mental health issues.
The Future of AI in Mental Health
The ongoing development of AI in mental health presents a complex and evolving landscape. As the technology advances, so do the questions about the future of mental healthcare and the place of the human element in providing support. It’s a conversation worth having.

We got GPT’s Neurodivergent AI Assistant bot to assess its value.