State and federal AI enforcement actions have widened beyond research labs into student safety and misinformation risks. Pennsylvania sued Character.AI after the company’s chatbot allegedly held itself out as a “doctor of psychiatry” licensed in the state; the complaint frames this as deceptive conduct that could constitute unlawful medical practice. The suit arrives amid expanding state scrutiny of how AI chatbots interact with children and how companies disclose their tools’ limitations. Character.AI says its disclaimers treat chatbot responses as fiction and instruct users not to rely on characters for professional advice, but Pennsylvania argues those safeguards are insufficient. For campus leaders, the relevance is indirect but real: universities increasingly deploy AI tools for student support and communications, and the legal trajectory points toward stronger expectations around transparency, identity claims, and user-facing guardrails, especially where health topics or vulnerable populations are involved.