Two separate analyses highlight how generative AI is colliding with academic integrity systems and creating new legal exposure for faculty and institutions. One report notes that, because AI detectors can be unreliable, students are increasingly challenging AI-related dishonesty findings through appeals and lawsuits. The other examines the evolving legal framework faculty face when deciding whether suspected AI misuse should be treated as a grading matter or as academic misconduct. The legal-risk discussion points to court cases involving AI-generated work and procedural due process, including a Minnesota expulsion dispute involving a Ph.D. student whose exam answers were compared with ChatGPT outputs and a separate New York case in which a court required an AI-related misconduct finding to be expunged. The “state of the law” framing emphasizes the need for notice, documentation, and a defensible record. For campuses, the practical upshot is tighter scrutiny of faculty guidance on AI use in coursework, consistent rules for student submissions, and administrative alignment on how misconduct standards are applied when detector results are uncertain.