Yara AI’s founders discontinued their mental‑health chatbot and cancelled a planned paid launch, saying the technology posed safety risks for vulnerable users. CEO Joe Braidwood and clinical co‑founder Richard Stott concluded that the platform could be "dangerous" in crisis situations and that current safeguards were insufficient. The shutdown underscores the limits of deploying AI for clinical or high‑risk counseling without rigorous clinical trials, liability frameworks, and escalation pathways to human clinicians. Campus counseling centers and student‑affairs teams experimenting with AI triage tools should note the risks of false reassurance and failure to detect suicidality. Universities integrating AI for mental‑health support must establish clinical oversight, data‑use agreements, and clear handoffs to licensed care. Regulators may increase scrutiny of AI products marketed for therapy.