Educators are adjusting assessment practices as generative AI makes some take-home work easier to reproduce, with growing interest in oral exams and in-person defenses. At Cornell, a biomedical engineering professor introduced “oral defense” exams that require students to explain their work directly because “you won’t be able to AI your way through an oral exam.” Across institutions, the shift often pairs oral formats with written work to reduce cheating incentives while still testing understanding. At the University of Pennsylvania, instructors have expanded oral exams and coupled them with faculty workshops on designing them.

The same pressure is shaping how AI tutors are marketed, evaluated, and governed in K-12 and higher education, driving a new focus on evidence, bias checks, and oversight capabilities rather than deployment alone.