Cornell biomedical engineering professor Chris Schaffer has expanded the use of "oral defense" exams in his course, an assessment approach designed to make it harder for students to outsource work to generative AI. Under the model, students explain their work directly to instructors, part of a broader movement among faculty to add in-person components to assessment.

At the University of Pennsylvania, associate professor Emily Hammer pairs oral examinations with written papers, requiring students to defend submitted work face-to-face. Bruce Lenthall of Penn's Center for Teaching and Learning described this as part of a wider shift toward in-person assessment, supported by faculty workshops that help instructors redesign their exams.

Across campuses, the core pressure is practical: take-home assignments and AI-generated writing can look polished, but oral explanations reveal whether learning actually occurred. The debate is shifting from whether students used AI to how institutions verify understanding and thinking. For academic leaders, the implications are operational: faculty need training, question banks, grading rubrics, and scalable logistics to run oral assessments alongside generative-AI-aware policies.