The University of Sydney rolled out a menu‑style set of assignment options that permits AI use for most out‑of‑class work while preserving proctored, human‑assessed in‑class tasks—a redesign aimed at restoring assessment validity in the era of generative AI. The model reframes assessment as a mix of trusted in‑person evaluation and AI‑enabled take‑home work.

Cornell University launched a discipline‑independent module to build students' critical‑thinking skills and to give faculty a framework for integrating those competencies across curricula. The module is presented as a scalable response to AI's effect on learning, strengthening the higher‑order analytic and reasoning skills that AI cannot easily replicate.

Both initiatives offer practical examples for institutions seeking to protect learning outcomes while integrating new tools. Provosts and assessment officers should expect more course redesigns that combine AI policy, academic‑integrity controls, and explicit skill scaffolding.