11 Jul 2024

Could Artificial Intelligence Replace Humans as Therapists?

For years the humanities were thought of as one of the final holdouts against the steady march of automation. Compassion, empathy, the ability to read non-verbal signals and an intuitive understanding of subconscious processes were thought to be beyond the reach of virtual therapists and artificial intelligence. Yet as far back as the 1960s, computer scientists and linguists were designing virtual therapists that mimicked some of these human qualities, and some of them were surprisingly successful.

The Evolution of Virtual Therapists: From ELIZA to Modern Artificial Intelligence

Joseph Weizenbaum’s (1966) natural-language program ELIZA (named after Eliza Doolittle from George Bernard Shaw’s Pygmalion) was able to mimic human conversation, giving the illusion of understanding on the part of the program. While ELIZA had many functions, the most interesting was the DOCTOR script, which simulated Rogerian psychotherapy. The key skills of paraphrasing, clarifying and summarizing lent themselves particularly well to ELIZA’s simple pattern-matching code. In fact, the program (simplistic by today’s standards) was so convincing that many test clients became emotionally attached to it. This prompted Weizenbaum to comment: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
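To see how little machinery this illusion requires, here is a minimal sketch of ELIZA-style Rogerian paraphrasing in Python. The rules, templates and function names below are invented for illustration and are far cruder than Weizenbaum’s original DOCTOR script; the point is the technique, not the fidelity.

import re

# Pronoun swaps that turn the client's words back on them,
# e.g. "my father" becomes "your father". A much-reduced,
# hypothetical stand-in for DOCTOR's reassembly rules.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "mine": "yours"}

# (pattern, response template) pairs: each captures part of the
# client's statement and paraphrases it back, Rogerian-style.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    # Return a paraphrased, open-ended reply; fall back to a
    # neutral prompt when no pattern matches.
    statement = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel lost since my father died."))
# prints: Why do you feel lost since your father died?

A script like this understands nothing: the reply is assembled mechanically from the client’s own words, and any sense of empathy arises entirely on the client’s side, which is precisely what troubled Weizenbaum.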

One explanation for this phenomenon may lie in people’s propensity to attribute human emotions and characteristics to non-human organisms or inanimate objects (or, in this case, computer programs). This can result in a kind of psychological imprinting similar to what we saw in the 2013 Joaquin Phoenix film Her (see Lorenz, 1966). Researchers believe this may be down to what is called apophenia: a tendency to perceive meaningful connections between unrelated things. So, if we are naturally hard-wired to make these connections, it is possible that we could also encounter transference in the relationship between a human client and an artificial therapist. Even with ELIZA’s crude programming, there does seem to be evidence of a strong emotional attachment forming between client and program.

Human Attributes in Artificial Intelligence: Emotional Connection and Psychological Imprinting

AI has come a long way since 1966. There are now countless programs, apps and tools aimed at assisting therapists in their work, and still more that are designed to provide psychological and emotional support directly to clients. The U.S. Department of Veterans Affairs recently combined virtual reality with artificial intelligence to provide an effective therapy for veterans experiencing post-traumatic stress disorder (PTSD). Indeed, initial findings suggest that VR- and AI-assisted therapy is at least as effective as human-to-human therapy in treating PTSD. Other apps, such as Tess, Wysa, Earkick and Woebot, provide “virtual psychotherapeutic services that have demonstrated promising results in reducing symptoms of depression and anxiety” (Holohan & Fiske, 2021).

All of this could mean that human-to-human therapy is living on borrowed time. Over the last few decades, the rules governing counselling and psychotherapy have become more and more restrictive, and the emphasis has shifted towards modalities that are less adventurous and less esoteric. We are seeing a significant drop-off in therapists using existential, gestalt and psychodynamic approaches in favour of more empirical cognitive-behavioural models. This is partly a response to the highly litigious culture in which we (in the Western world) live: the less room we leave for human error and experimentation, the less likely we are to find ourselves on the receiving end of litigation. As a result, psychotherapy has become so risk-averse that it has started to lose its sense of adventure and creativity in favour of rules and regulations. One can only imagine what Jung, Perls and Frankl would have made of these changes. All of this presents the profession with a dilemma: if AI is becoming more human and we are becoming more automated, could we be paving the way for artificial intelligence to replace human counsellors?

Holohan and Fiske (2021) have identified a number of areas where AI already outperforms human therapists. These include the prediction and detection of mental health issues, the monitoring of client progress through therapy, the use of chatbots and virtual agents to provide 24-hour care, and the ability to run psychological assessments, psychometric tests and evaluations with instant results. In addition, it seems that AI chatbots can create (and sustain) seemingly meaningful connections with human clients (Tolin, 2015). An analysis of people who used the therapeutic app Earkick found a 34% improvement in mood and a 32% reduction in anxiety over a period of five months. Another poll found that 80% of people who used ChatGPT for mental health advice considered it a good alternative to regular therapy. Furthermore, Tolin found that people were making what they believed to be genuine connections with their AI therapists. The question is just how ‘real’ these connections are. Are they simply a modern expression of the Barnum effect: a projection of human values and expectations onto something that merely appears to have human qualities, a kind of electronic animism? J. P. Grodniewicz (2023) argues that AI chatbots do perform better than human therapists in those areas where measurement and monitoring are required. What we do not know yet is whether people can make healthy emotional connections with AI chatbots. And because we do not know exactly why therapy works (see the debate over ‘common factors’), we cannot program this into an AI algorithm. So it seems that, for now, creating and sustaining a genuine therapeutic alliance may be beyond the reach of AI. But given how fast AI is evolving, it may only be a matter of time before this limitation is overcome.

Advancements in AI-Assisted Therapy and Its Impact on Mental Health

Currently, only 20% to 25% of U.S. adults are comfortable with the idea of AI-assisted mental health care. However, around 40% think AI will “help more than it hurts” in the medical field. Other research (Li et al., 2023) found that AI chatbots have the potential to “effectively alleviate psychological distress” and could even result in therapeutic relationships being formed with the AI. In addition, AI chatbots have instant access to up-to-date research, theoretical repositories, and skills and techniques that usually take humans years to learn and implement. So, is it just a matter of time before we become acclimatised to AI chatbots and start to accept them not just as therapeutic tools, but as therapists in their own right? I would argue that the more regulated and rules-based the profession becomes, the more we invite challengers, whether from other, less confining disciplines (such as life coaching) or in the form of artificial intelligence. Counselling and psychotherapy currently seem intent on becoming ever more rules-based and regulated, and ever less creative, imaginative and adventurous. As a result, it may well be our own lack of imagination and creativity, rather than AI in and of itself, that narrows the gap between the human therapist and the artificial one.

About the Author

Ian Cox is a senior lecturer in Counselling & Psychotherapy with The Irish College of Humanities & Applied Sciences (ICHAS) at Griffith College Dublin. Ian has worked for many years as a private counsellor from his home in Co. Wexford and in South County Dublin, practising within a broad humanistic framework. Ian has worked with both adults and adolescents and takes a particular interest in Developmental Disorders, Autistic Spectrum Disorders and Mood Disorders. Ian also works with gifted children and adults and advocates on their behalf in Irish primary and secondary schools.
