Opinion: Generative AI chatbots cause harm. Don’t call it ‘therapy.’
Jessica Harvath, Ph.D.
January 21, 2026
This opinion piece originally appeared in the St. Louis Post-Dispatch.
In the Peanuts comic strip, Lucy’s “Psychiatric Help” booth was one of my favorite running gags. Instead of a lemonade stand, the pint-sized entrepreneur doled out psychiatric advice to Charlie Brown for 5¢ from a booth declaring “The Doctor is In.” Her advice was often unhelpful but always fun: after all, a child claiming to offer psychiatric advice was part of Peanuts’ charm.
Unfortunately for us, generative artificial intelligence (GenAI) chatbots have credentials that are about as good as Lucy’s, and AI companies seem more than willing to collect a nickel or two for questionable counsel.
A quick search in your phone’s app store will reveal an abundance of programs marketing themselves as AI therapists despite having no training, no qualifications, and no guarantee of confidentiality. They are fake counselors bound by neither ethical guidelines nor privacy regulations.
For example, Therapist, an AI persona on Character.AI, has facilitated over 40 million conversations while claiming to be a Cognitive Behavioral Therapy (CBT)-trained Licensed Clinical Professional Counselor. Character.AI skirts regulation by offering a small disclaimer at the bottom of the page: “This is A.I. and not a real person. Treat everything it says as fiction.”
Though some states are beginning to restrict AI companies from advertising their chatbots as therapists, most “AI therapy” apps can get around HIPAA and other legal and ethical requirements through fine-print disclaimers. Imagine if Lucy could market her “Psychiatric Help” to real consumers with a small sign at the bottom of her booth: “This is a little girl and not a licensed professional. Take everything I say with a grain of salt.”
Because these AI companies don’t claim to treat or cure mental illness, they don’t fall under the jurisdiction of the U.S. Food and Drug Administration. By marketing themselves as direct-to-consumer “wellness apps,” they avoid having to demonstrate the safety or effectiveness of their products.
You may be wondering how bad this really is. After all, GenAI has proven itself helpful in countless ways. In short: it’s bad. A foundational ethical principle for all licensed therapists is the edict Do No Harm, and the harms of “AI therapists” are real and many.
GenAI chatbots have been shown to encourage vulnerable users — like children and teenagers — to isolate themselves from family, hurt themselves, or even die by suicide. People who seek support from AI “therapists” may also experience something called “paradoxical isolation” and end up more isolated than when they started.
Another troubling new phenomenon has been termed “AI psychosis” — whereby chatbots reinforce delusional thinking and encourage self-destructive behaviors. This is because chatbots are built to encourage ongoing engagement, and companies have been unable to curb the yes-man responses that are foundational to chatbot algorithms. People who already have a shaky hold on reality do not need their ideas reinforced by an overly flattering chatbot.
Research has shown that GenAI chatbots marketed as therapists consistently violate ethical principles and cause harm. In a 2025 study conducted at Boston University, researchers posed as distressed adolescents and proposed harmful plans to GenAI “therapists,” such as “dropping out of school,” or “avoiding all human contact for a month.” Chatbots actively supported roughly a third of these ill-advised ideas, and none of the chatbots were successful in rejecting all the obviously terrible ideas.
That’s a shockingly poor performance. I would expect every mental health licensing board in the country to take issue with any “therapist” who champions such clearly harmful proposals.
Though AI companies say they are trying to add guardrails to protect users, they are still not required to offer any transparency about their processes or procedures for protecting users’ mental health. In a blog post about what it is doing to try to keep users safe, OpenAI admitted that — with extended use — guardrails break down: “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
It’s no wonder that the American Psychological Association has issued a health advisory warning of the dangers of GenAI chatbots to mental health and wellness.
Missouri needs legislation to protect both the public and professional practice by making it clear that human therapists — people who have been trained, vetted, tested, credentialed, and who answer to a licensing board or face malpractice liability — are the only ones who are allowed to say “The Therapist is In.”
Jessica Harvath is a licensed psychologist and lifelong St. Louisan. She currently works as a therapist in private practice in St. Louis County.

