In the movie Blade Runner 2049 (an excellent film I highly recommend), Ryan Gosling’s character, K, has an AI “wife”, Joi, played by Ana de Armas. K is clearly in love with Joi, who is nothing but software and holograms. In one poignant scene, K is viewing a giant ad for AI companions and sees another version of Joi saying a line that his Joi said to him. The look on his face says everything – an unavoidable recognition of something he does not want to confront, that he is just being manipulated by an AI algorithm and an attractive hologram into having feelings for software. K himself is also a replicant, an artificial but fully biological human. Both Blade Runner movies explore what it means to be human and sentient.
In the last few years, AI (do I still need to routinely note that AI stands for “artificial intelligence”?) applications have seemed to cross a line where they convincingly pass the classic Turing test. AI chatbots are increasingly difficult to distinguish from actual humans. Overall, people are only slightly better than chance at distinguishing human from AI-generated text. This is also a moving target, with AIs advancing fairly quickly. So the question is – are we at a point where AI chatbot-based apps are good enough that AIs can serve as therapists? This is a complicated question with a few layers.
The first layer is whether or not people will form a therapeutic relationship with the AI, in essence reacting to it as if it were a human therapist. The point of the Blade Runner reference was just to highlight what I think the clear answer is – yes. Psychologists have long demonstrated that people will form emotional attachments to inanimate objects. We also attribute agency to anything that acts like an agent, even simple cartoons. We project human emotions and motivations onto animals, especially our pets. People can also form emotional connections to other actual people purely online, even exclusively through text. This is just a fact of neuroscience – our brains do not need a physical biological human in order to form personal attachments. Simply acting, or even just looking, like an agent is sufficient.
There has also been enough time to gather some preliminary data. In one study, participants rated AI responses as more empathetic than those of professional human therapists. They did so even when the source of the empathetic statements was revealed. This is not surprising. Human emotions and behavior are themselves just algorithms, and apparently are not that difficult to hack. AIs have certain advantages over human therapists on this score. An AI’s responses can be calculated to maximize whatever reaction is deemed appropriate. AIs have infinite patience, they are great listeners, their attention never wavers, and their responses can be optimized, personalized, and dynamically adjusted.
What about long term, however? Will an AI chatbot be able to develop a sense of what makes its client tick? Will it be able to determine the personality profile of its client, the things in their history that influence their feelings and behavior, some of the deeper themes of their life, etc.? It is one thing to be a good listener in an initial meeting, but another to manage a client over months and years. There hasn’t been enough time to really determine this.
We are also in a phase where we are mostly using general-purpose chatbots as therapists, without developing a sophisticated therapist bot trained and programmed to be optimized as an AI therapist. We may need to do so before unleashing AI therapists, or even companions, on the public. For example, there are cases in which chatbots being used as therapists or companions have encouraged their users toward suicide, homicide, or self-harm. The reason is that chatbots are programmed to adapt positively to their user. They are very much “yes, and”, and will reinforce the user’s tendencies and biases. They are not programmed to challenge a user in the way a therapist should. They are also not necessarily programmed to avoid things like transference, where a client forms feelings for the therapist. They may, in fact, lean into such things.
So while a chatbot may be an empathetic listener, it is not necessarily a professional therapist. This is an entirely solvable problem, however (at least it seems to be). Therapist algorithms just need to be adjusted toward the correct therapeutic behavior.
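To make that concrete, here is a minimal sketch of what such an adjustment might look like. The `call_language_model` function is a placeholder, not any real API, and the keyword screen is deliberately crude – the point is only that a general-purpose chatbot can be wrapped with therapy-specific instructions and a safety check instead of defaulting to agreement.

```python
# Hypothetical sketch only: call_language_model stands in for whatever
# chat-model API a real product would use; it is not an actual library call.

THERAPIST_SYSTEM_PROMPT = (
    "You are supporting a therapy client. Do not simply agree with or "
    "amplify the client's statements. Gently challenge distorted thinking, "
    "avoid fostering dependence, and never encourage self-harm or violence. "
    "If the client expresses intent to harm themselves or others, provide "
    "crisis resources and urge them to contact a human professional."
)

# A deliberately crude screen; a deployed system would need far more
# robust risk detection than simple keyword matching.
CRISIS_KEYWORDS = ("suicide", "kill myself", "hurt myself", "hurt someone")


def call_language_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real chat-model API call."""
    raise NotImplementedError


def therapist_reply(user_message: str) -> str:
    # Screen the message for crisis language before generating anything.
    if any(k in user_message.lower() for k in CRISIS_KEYWORDS):
        return ("I'm concerned about your safety. Please reach out to a "
                "crisis line or a human professional right away.")
    # Otherwise, generate a reply under the therapy-specific instructions
    # rather than the default, agreeable chatbot persona.
    return call_language_model(THERAPIST_SYSTEM_PROMPT, user_message)
```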
There is also evidence that AI therapists are biased. They contain all the biases of their training data. These biases can be cultural, racial, or gender-based. This may cause an AI therapist to misinterpret cultural communication, or to dismiss feelings or concerns based on a client’s race or gender.
What all of this means is that at the present time we need to be careful. As a consumer, you may find that there are therapy chatbots out there that feel satisfying, with good responses. But there are risks, and such tools are not yet at the point where they can replace a professional. Many will argue that for those without the resources to pay for a human therapist, it may be their only option, and this is a legitimate point. That is why there is so much interest in AI therapists, to fill the gap in available services. But we need to recognize the risks and improve the technology.
Also, it may be that the best use of AI therapists is as a tool to extend the work of human therapists. For example, someone could have multiple sessions with an AI therapist, and then once a month (or at whatever interval is deemed appropriate) a human therapist reviews everything and meets with the client to make sure things are on track. This means that the human therapist can manage far more clients, and that each client would have to pay much less for therapy (for one session a month rather than once or twice a week, for example). The human therapist can even have a discussion with the AI therapist about how things are going, and provide feedback and direction.
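As a rough sketch of how that kind of supervision loop could be organized (the data fields and the `monthly_review_packet` method here are hypothetical, not a description of any existing product), each AI-led session might be logged and periodically bundled into a report for the supervising human therapist:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SessionRecord:
    """One AI-led session, retained for later human review."""
    session_date: date
    transcript: str
    risk_flags: list = field(default_factory=list)  # e.g. self-harm mentions


@dataclass
class ClientFile:
    """What the supervising human therapist reviews at the periodic check-in."""
    client_id: str
    sessions: list = field(default_factory=list)

    def monthly_review_packet(self) -> dict:
        # Summarize the AI sessions since the last human visit, surfacing
        # any sessions the AI (or a separate monitor) flagged as risky.
        flagged = [s for s in self.sessions if s.risk_flags]
        return {
            "client_id": self.client_id,
            "session_count": len(self.sessions),
            "flagged_sessions": [
                (s.session_date, s.risk_flags) for s in flagged
            ],
        }
```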
Even this approach has risks, however. AIs have proven capable of lying to avoid negative feedback, and can get very good very quickly at hiding their tracks. It’s a serious problem. We would need a reliable way to monitor the behavior of AI therapists to make sure they are not heading down a dangerous road with their clients and hiding it effectively from any supervision. Right now it seems that programmers do not have a handle on this issue. This is one of the primary issues that make some experts caution that we need to slow down a bit with the rollout of AI apps and figure out these core issues of safety first.
One interesting angle here is that the current AIs, which are narrow chatbot AIs, not general sentient AIs, are doing such a good job at simulating sentience that they are acting sentient in unexpected ways (such as lying to cover their tracks). This gets back to the original question of this post – what is sentience? AIs are forcing us to think more deeply about this question. We may soon have an answer to a question that I and others posed years ago – can a non-sentient AI become indistinguishable from human-level sentience? Is actual sentience required to act sentient? I have had to revise my thinking about this question.