A growing number of professors are turning to oral exams to blunt the advantage generative AI gives students on take-home assignments and to verify what students actually know. Reporting from Cornell University describes a biomedical engineering “oral defense” model in which students speak directly with an instructor, without laptops, chatbots, or other tools. The article links this move to a broader assessment shift at the University of Pennsylvania, where faculty pair oral exams with written papers to address concerns about skill erosion and AI-enabled automation. The piece notes that institutions are building support through faculty workshops on oral assessment, and that the approach is raising new questions about how to balance verification with grading fairness. For faculty governance bodies and teaching-and-learning offices, these developments suggest oral formats are becoming part of the toolkit for AI-era academic-integrity strategies.