AI chatbots are increasingly being used for emotional support, but new research shared with Fortune finds that today's major models still struggle with the clinical-level nuance needed for safe mental-health guidance. The study, conducted with mpathic, a company founded by clinical psychologists, reports that signs of risk often surface indirectly, through subtle shifts in tone or indirect mentions of dieting or hopelessness, and that models respond reliably only when crisis signals are explicit.

The findings arrive alongside survey evidence that chatbot use for mental health information is widespread. A KFF poll cited in the article reports that 16% of U.S. adults have used AI chatbots for mental health information in the past year, rising to 28% among adults under 30. Researchers at RAND, Brown, and Harvard also found notable chatbot use among teens and young adults, with most users saying the advice was helpful.

For colleges and universities, the immediate impact falls on campus well-being and counseling operations: students may present AI-generated guidance as “care,” even though the models were not designed to detect nuanced escalation or to know when to recommend professional intervention. The research frames a safety gap that institutions promoting or permitting AI tools may need to address in student-facing policies and training. The broader implication for higher ed is that generative AI support tools are outpacing clinical validation, increasing the likelihood of inconsistent outcomes for students seeking urgent help.