Cornell University is expanding its use of oral exams, a shift designed to make it harder for generative AI to stand in for evidence of learning. In Chris Schaffer’s biomedical engineering class, students deliver an “oral defense” with no laptops, no chatbots, and no paper, explaining their work directly to an instructor. Faculty leaders say the approach responds to a recurring pattern: take-home written assignments come back “perfect,” yet students cannot subsequently defend them in person. At the University of Pennsylvania, instructors pair oral exams with written papers, framing the tests as skill recovery rather than mere cheating prevention. The article describes a growing workshop culture around oral assessments as institutions try to preserve critical thinking and cognitive capacity while maintaining rigor in AI-saturated coursework. The immediate impact is practical: faculty need training and clear rubrics, and students face new exam dynamics that change how they study in generative AI classrooms.