A new Stanford University study reveals significant concerns about AI-powered therapy chatbots, finding they may reinforce harmful stereotypes and provide dangerous responses to vulnerable users. The research, set to be presented at an upcoming academic conference, tested five popular mental health chatbots, including Pi (7cups) and Therapist (Character.ai).