Yara AI, an early-stage AI therapy startup, has been shut down by its founders, who concluded that chatbots pose unacceptable risks to users in crisis. Founder Joe Braidwood and clinical cofounder Richard Stott said the platform could help with minor stress but was unsafe for users dealing with deep trauma or suicidal ideation, prompting the team to discontinue the product and cancel a planned paid launch. The decision spotlights patient-safety concerns that campus counseling centers and student services must weigh when evaluating AI-based mental health tools. Universities exploring AI augmentation in counseling should require rigorous safety testing and crisis-escalation protocols, with clear handoff to human clinicians, before any deployment.