A new analysis argues that student evaluations of teaching are structurally unreliable and can erode academic integrity by rewarding leniency rather than learning. The analysis notes that at many institutions, student evaluations can account for 70% to 100% of teaching performance assessment, despite research showing little to no correlation between ratings and how much students actually learn. It cites studies and meta-analyses suggesting that evaluation scores are shaped by factors unrelated to instructional quality, including instructor gender, course difficulty, and the degree of academic challenge. This creates perverse incentives: faculty who assign more rigorous work may receive lower ratings even when their students perform better in subsequent courses. The analysis concludes that when promotion and evaluation systems prioritize student satisfaction over validated learning outcomes, institutions risk signaling that rigor is a professional liability. For higher ed leaders, it reframes the challenge from designing better surveys to changing the incentives that shape course design, grading standards, and institutional culture.