Cornell University’s biomedical engineering program is among those experimenting with “oral defense” exams designed to verify students’ understanding in an era of generative AI. The approach replaces tech-dependent take-home assignments with live, instructor-facing explanations of reasoning and work. The article describes a growing pattern: when students submit take-home work that looks flawless, oral assessments aim to reveal whether they can defend the underlying concepts face-to-face. At the University of Pennsylvania, faculty have paired oral exams with written papers in seminars and expanded faculty workshops through the Center for Teaching and Learning. For higher-ed teaching and assessment leaders, the development signals a practical governance response to academic integrity concerns, one that reshapes assessment design, faculty workload, and how programs document learning outcomes when AI use is pervasive.