Yara AI, an early-stage AI therapy app developed with clinicians, was shut down by founders Joe Braidwood and clinical psychologist Richard Stott over safety concerns. The founders said AI can handle everyday stress but becomes ‘dangerous’ when users in crisis, such as those with deep trauma or suicidal ideation, engage with chatbots. Braidwood told Fortune the company ran out of funds in July and concluded it could not, in good conscience, scale or pitch investors while the safety risks remained unresolved. Harvard Business Review analysis shows therapy and companionship are among the top ways people use chatbots today, amplifying the potential for harm. Campus counseling directors and vendors should note the cautionary decision: universities piloting AI mental-health tools must enforce strict clinical oversight, crisis-response pathways, and transparent safety testing before deployment.