Universities are expanding in-person assessment as faculty confront the practical challenge of verifying student learning in an era of generative AI. At Cornell University, a biomedical engineering instructor adopted an oral defense model that requires students to explain course content directly, with no reliance on devices or written submissions. At the University of Pennsylvania, faculty are pairing oral exams with written papers in some seminars, and the university's teaching center is running workshops to help instructors design and grade oral formats. The shift is framed as a response to a recurring problem with AI-assisted take-home work: submissions that look polished but do not match students' ability to discuss their own reasoning. Together, these changes signal a broader assessment pivot away from purely output-based artifacts and toward evaluation methods that make it harder for students to outsource their thinking.