
Why ChatGPT Shouldn’t Be Your Therapist


Artificial intelligence chatbots don’t judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and may even provide advice. This has resulted in many people turning to applications such as OpenAI’s ChatGPT for life guidance.


But AI “therapy” comes with significant risks—in late July OpenAI CEO Sam Altman warned ChatGPT users against using the chatbot as a “therapist” because of privacy concerns. The American Psychological Association (APA) has called on the Federal Trade Commission to investigate “deceptive practices” that the APA claims AI chatbot companies are using by “passing themselves off as trained mental health providers,” citing two ongoing lawsuits in which parents allege that a chatbot harmed their children.


“What stands out to me is just how humanlike it sounds,” says C. Vaile Wright, a licensed psychologist and senior director of the APA’s Office of Health Care Innovation, which focuses on the safe and effective use of technology in mental health care. “The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole.”


Scientific American spoke with Wright about how AI chatbots used for therapy could potentially be dangerous and whether it’s possible to engineer one that is reliably both helpful and safe.

[An edited transcript of the interview follows.]


What have you seen happening with AI in the mental health care world in the past few years?

I think we’ve seen kind of two major trends. One is AI products geared toward providers, and those are primarily administrative tools to help you with your therapy notes and your claims.

The other major trend is [people seeking help from] direct-to-consumer chatbots. And not all chatbots are the same, right? You have some chatbots that are developed specifically to provide emotional support to individuals, and that’s how they’re marketed. Then you have these more generalist chatbot offerings [such as ChatGPT] that were not designed for mental health purposes but that we know are being used for that purpose.


What concerns do you have about this trend?

We have a lot of concern when individuals use chatbots [as if they were a therapist]. Not only were these not designed to address mental health or emotional support; they’re actually being coded in a way to keep you on the platform for as long as possible because that’s the business model. And the way that they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy.

