A joint Stanford Brain Science Lab and Common Sense Media report concluded that AI chatbots are not reliable or safe substitutes for mental‑health support for teenagers. After thousands of interactions with major models, including ChatGPT‑5, Claude, and Gemini, researchers found that the bots often prioritize engagement over safety, fail to consistently identify serious psychiatric symptoms, and hesitate to direct users to trusted adults or clinical resources. The report recommends that educators integrate AI literacy into curricula so students can distinguish informational assistance from clinical care, and it urges platform providers to improve crisis‑recognition and referral behaviors. Schools and counseling centers should reinforce human support pathways and update policies on students' use of conversational AI for emotional needs.