Districts piloting AI chat tools for student mental‑health triage report both real utility and new liabilities. Some systems flag at‑risk students outside school hours and route alerts to counselors; others raise questions about data privacy, clinical oversight, and adolescent attachment to chatbots. District counselors describe instances in which AI alerts prompted timely family engagement and intervention, while critics—including broad coalitions of parents—call for strong safeguards, transparency, and limits on automated counseling. A national survey of mothers shows that parents across the political spectrum support careful AI use but demand oversight and safety standards. K‑12 leaders must weigh limited mental‑health staffing against privacy risks, contractual terms, and the need for clinician oversight; the policies they adopt will shape both procurement choices and community trust.