Higher education’s AI-cheating landscape continues to shape how instructors design assignments, with renewed focus on assessment formats that discourage opportunistic use of generative tools. The report frames thoughtful assessment design as “the best defense” for educators, directing attention toward classroom strategies rather than reliance on one-off detection tools. The emphasis is on designing learning activities so that students must demonstrate understanding in ways that are hard to replace with surface-level AI output. As generative AI becomes more accessible, the core operational question for campuses is how to align evaluation with learning goals while maintaining fairness and academic integrity. The piece reinforces that assessment design can shift incentives: evidence of learning becomes harder to outsource and easier for faculty to interpret consistently.