A University of Notre Dame freshman’s AI agent, Kerra, which connects to Canvas, triggered immediate institutional action after it was promoted to students in an email claiming it could diagnose where they are falling short and generate step-by-step routes to better grades. The university removed the email, disabled the student’s account, and is investigating whether the tool constitutes an academic cheating system. The student, Caden Chuang, disputes that characterization, framing Kerra as a productivity aid that creates study guides, notes, drafts, and deadline reminders. The tool’s freemium pricing, with monthly fees for certain outputs, raises a further compliance question for admissions, academic integrity offices, and faculty. The case is part of a broader wave of student-built AI systems forcing administrators to decide quickly how to apply cheating definitions, acceptable use policies, and learning-support exceptions. For institutions, the episode underscores how generative AI blurs the line between “self-authored study tools” and “unapproved automation,” requiring clearer policy language and faster investigative processes.