There is an epidemic of loneliness in this country that has been identified as a genuine health crisis. To take it on, we need an army of therapists that currently doesn’t exist.
In these desperate times, some are looking to artificial intelligence for solutions, using large language model-trained chatbots as simulated patients to train therapists — or even using AI chatbots as therapists.
Naturally, there are a lot of ethical implications to consider. Nir Eisikovits joined GBH’s All Things Considered host Arun Rath to talk it through. Eisikovits is the director of the Applied Ethics Center at UMass Boston, research partner at the Institute for Ethics and Emerging Technologies and co-author of a paper examining the use of chatbots in psychotherapy. What follows is a lightly edited transcript.
Arun Rath: First off, tell us about the extent to which AI is already being used in therapy. Is it being used in training yet?
Nir Eisikovits: Well, Arun, it’s early days. Large language models and generative AI have been with us at the scale we’ve all been hearing about for probably less than two years. But that said, yes, it is being used. And there’s an important distinction.
It’s being used in an adjunctive or assistive capacity as a simulator to train therapists with chatbot patients. It’s being used to take medical histories on intake. Sometimes it’s used as a diagnostic, early-warning tool that combines data from wearables with voice and speech-pattern analysis to give us signs that people are starting to get into trouble. It’s being used as a scheduling tool or even as a medical follow-up tool. All of those are potentially promising and useful assistive tools.
What’s more interesting — and, as you suggest, a lot more problematic — is using the chatbots as therapists, or instead of therapists. That’s on the radar, too. There are applications that increasingly are doing that as well.
Not all of them claim therapeutic benefits, but there are applications that, practically speaking, do that.
Rath: Talk about where we are with that right now. Because even before the current AI breakthroughs, there were simulators — computer programs or apps — that did this sort of therapeutic question-asking. What distinguishes these newer ones?
Eisikovits: Right. Well, there’s the sort of classical, older ‘ELIZA’ tool, which would sort of turn questions and statements back to the users — almost a parody of psychoanalysis. People got plenty attached to those as well.
But this is no ELIZA. These are large language model-based chatbots — often trained on therapeutic transcripts, along with all the other relevant textual information these models ingest — that, at the most extreme end of the scale, can have a conversation that is not easy to distinguish from a human conversation.
There are a couple of FDA-approved applications that are mainly for cognitive behavioral protocols. But then there are a lot more of these chatbots that don’t claim a therapeutic benefit — so they don’t have to be regulated — but that are, in practice, chatbot therapists. You can download an app, ask it questions, ask it for advice and have that kind of intimate conversation.
In terms of some of the moral questions this raises: let’s say the assistive uses — like scheduling, simulators and early warning — are, in principle, OK, as long as privacy issues, the quality of the training data, algorithmic bias and all those kinds of things are addressed. But what about having a relationship with a chatbot? What about having a chatbot as your therapist?
For my money, the main question is: look — having a therapist is having an actual, close, important, significant relationship. These relationships turn on something that’s sometimes referred to as a therapeutic alliance, [which is] this idea that the therapist and the patient together figure out what [they are] supposed to achieve, and they can have a relationship of mutual recognition and empathy. Can you have a therapeutic alliance with a chatbot?
My initial sense is [that] a chatbot can sound like it can have a real relationship with you, but that’s different from actually having it. As an example: for all of us, it means something when a therapist, or even a close friend for that matter, says, “I’ve been worried about you,” or “I’ve been thinking about you as you’ve been going through X, Y and Z.” That’s moving for us in some basic way.
Now, a chatbot can say it, but it can’t mean it. The issue becomes: well, fine, but what if the patient feels that they get the benefit anyway? There are people who report that, in these interactions with the chatbot, they feel that the chatbot cares more about them than their family does [or] more than a flesh-and-blood therapist does.
Is that enough? A patient can give the subjective report: “This feels good to me. This feels like a relationship to me. This feels like [the chatbot] cares about me.” Is that enough?
Rath: If we go ahead and stipulate that chatbot therapy is not ideal, it’s not as good as a human therapist, is there still an argument, though — given what we talked about in terms of the epidemic of loneliness — that it’s better than no therapist at all? Is it something we should maybe be reaching for, given that we don’t have enough therapists?
Eisikovits: Yeah, you know, you raise an absolutely crucial point. Like you said, there’s the epidemic of loneliness and the lingering after-effects of COVID, particularly for young people, and particularly for young girls. Is this better than having no therapist? It’s easy to take a sort of elitist, upper-middle-class view that says, “Human therapy or bust.”
Is this better than nothing? Short term, it probably is. But none of these interactions are short term. As people participate in them, you’re also training them in what it means to have a relationship.
You’re training people out of this idea that having any meaningful relationship is hard and it involves friction. It involves not always being satisfied and not always having somebody available to talk to you and sometimes being disappointed. These things don’t happen with chatbots. They’re always there. They can always sound like they’re empathetic. They can always sound like they care about you and only you.
You do have to ask yourself in this cost-benefit analysis: What kind of psychological habits are we instilling in people about what it means to have a relationship? How important is it that our relationships be real? If you have a chatbot therapist, why not have a chatbot best friend who is nicer than your best friend? Why not have a chatbot romantic partner? By the way — lots of people are having them. Why not replace all the important relationships in our lives if it actually feels better?
If we accept these as solutions that are better than nothing — which, in some ways, you’re right, they are — we are also training people into a whole set of habits that I’m far from sure will serve them well.