A first-year student at the University of Notre Dame, Caden Chuang, triggered an immediate academic integrity response after announcing an AI agent called “Kerra” that connects to Canvas and generates study guides, notes, and even assignment drafts. Notre Dame deleted his announcement email and disabled his account within an hour, and students say the university is investigating him for creating an AI cheating tool. Chuang disputes that characterization, arguing Kerra is designed as a productivity and study-support tool, not a cheating aid.

Kerra reads Canvas assignments, grades, and uploaded course materials to produce study outputs, and it sends deadline reminders. It offers a free signup tier, with paid features priced in the $9 to $20 per month range.

The episode follows at least two other campus incidents involving student-built AI systems: a viral “Einstein” agent rollout and a separate case at Columbia University, where a student clashed with administrators over discreet AI software designed to help candidates pass technical interviews. The Notre Dame dispute illustrates a new campus compliance challenge: students and developers are moving quickly into capability spaces that institutions’ academic integrity frameworks may not yet define clearly, pushing faculty governance and compliance teams into faster policy interpretation cycles.