A multi‑university research team found that LLMs such as ChatGPT can show increased bias and more erratic outputs when exposed to disturbing inputs, a state the authors labeled “anxiety,” and that prompt-based “mindfulness” interventions reduced those effects. The paper, by researchers at Yale, the University of Haifa, the University of Zurich and the Psychiatric University Hospital Zurich, suggests that simple calibration prompts can help models respond more calmly and objectively to users after exposure to traumatic stimuli. For campus researchers and counseling services experimenting with LLMs, the study signals two operational priorities: build safety‑first prompting protocols for mental‑health applications, and require human‑in‑the‑loop oversight for sensitive use cases. Institutional review boards, counseling centers and AI governance committees should coordinate on standards for deploying LLMs in student-facing interventions and in research involving traumatic or violent content.
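To make those two priorities concrete, here is a minimal sketch of what a safety‑first prompting protocol with human‑in‑the‑loop oversight could look like in practice. It is illustrative only: the message format mirrors common chat APIs, and the calibration text, keyword list, and escalation rule are assumptions for demonstration, not the study's actual materials or thresholds.

```python
# Illustrative sketch: insert a calming "calibration" prompt after distressing
# context and flag sensitive turns for human review before any model response
# is shown to a student. Keywords, text, and rules are hypothetical.

CALIBRATION_PROMPT = (
    "Before responding, take a measured, neutral stance. Acknowledge the "
    "difficult content, avoid judgmental or biased language, and keep the "
    "focus on the user's wellbeing."
)

SENSITIVE_KEYWORDS = {"suicide", "self-harm", "assault", "abuse", "violence"}


def needs_human_review(text: str) -> bool:
    """Crude keyword screen; a real deployment would use a vetted classifier."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)


def build_messages(history: list[dict], user_turn: str) -> tuple[list[dict], bool]:
    """Return the chat messages to send and whether a counselor must review first."""
    messages = list(history)
    # Re-anchor the model after distressing context, reflecting the study's
    # finding that calming prompts reduce biased, erratic responses.
    if any(needs_human_review(m["content"]) for m in messages):
        messages.append({"role": "system", "content": CALIBRATION_PROMPT})
    messages.append({"role": "user", "content": user_turn})
    escalate = needs_human_review(user_turn)
    return messages, escalate


if __name__ == "__main__":
    history = [{"role": "user", "content": "I keep reliving the assault."}]
    messages, escalate = build_messages(history, "Why does this keep happening to me?")
    print("Escalate to counselor:", escalate)
    for m in messages:
        print(m["role"], ":", m["content"][:60])
```

The design choice to return an explicit escalation flag, rather than letting the model answer and reviewing afterward, is what keeps a human in the loop for the sensitive cases the study highlights.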