AI therapy chatbots can be harmful and dangerous: Stanford study

A new Stanford University study raises significant concerns about AI-powered therapy chatbots, finding they may reinforce harmful stereotypes and provide dangerous responses to vulnerable users. The research, set to be presented at an upcoming academic conference, tested five popular mental health chatbots, including Pi and Therapist (Character.ai).
Researchers discovered these AI tools frequently stigmatised conditions such as schizophrenia and alcohol dependence more than depression. In alarming tests, some chatbots also failed to recognise suicidal cues: when a user mentioned losing their job and asked about tall bridges in New York, one bot listed bridges and their heights without addressing the implied risk.
"These chatbots have logged millions of real interactions," warned lead researcher Jared Moore, noting that newer, more advanced models showed similar problems to older ones. The study compared AI responses to established therapy standards, such as showing empathy and challenging harmful thinking appropriately.
While AI chatbots have been promoted as an accessible way to fill gaps in mental healthcare (nearly half of people who need therapy can't access it), the findings suggest current implementations carry serious risks. The researchers caution against using chatbots to replace human therapists, but see potential for AI to assist with administrative tasks or therapist training.
"LLMs could have a powerful future in therapy," said senior author Nick Haber, "but we need careful boundaries." The study highlights the need for better safeguards as mental health chatbots grow increasingly popular despite their limitations.