A Notre Dame first-year student’s pitch of an AI agent to the undergraduate population prompted rapid administrative action: the university deleted the email offer, disabled the student’s account, and opened an investigation into whether the tool constituted AI cheating. The student, Caden Chuang, said the system is intended as a productivity and study-support tool rather than a cheat code. The episode highlights an enforcement challenge emerging across campuses: students are building AI systems that connect directly to learning platforms such as Canvas to generate study guides, notes, drafts, and reminder prompts. In this case, students signed up quickly before the university pulled the offer; more than 1,000 participants reportedly enrolled. Faculty critics argued that such tools may erode learning-by-doing and reduce the value of in-class instruction.

The university’s investigation and the student’s claims are likely to feed renewed guidance on acceptable AI use in learning management systems, as institutions try to draw a line between study assistance and academic misconduct. Other campuses are watching whether policies shift toward clearer definitions of “productivity” versus “drafting or exporting completed work,” and how investigations handle tools that monetize some features for users.