As generative AI reshapes coursework, universities are tightening how they verify learning, with more faculty shifting toward oral examinations and in-person defenses that no chatbot can easily replicate. At Cornell, biomedical engineering professor Chris Schaffer has added oral defenses, while instructors at the University of Pennsylvania increasingly pair in-class explanations with written work to test whether students actually understand their submissions. These changes come amid reports that take-home writing assignments are yielding “perfect” outputs even as students struggle to explain the underlying concepts when questioned directly. At Penn, the Center for Teaching and Learning has been organizing faculty workshops on oral exams, part of a broader push away from purely written evaluation. The trend sharpens a near-term governance question for higher education: how to design assessment policies that deter misuse without undermining academic trust or instructional goals. It also adds workload and scheduling demands for faculty and teaching staff during peak grading periods.