Voices across higher education are warning that the greatest risk from artificial intelligence is not student cheating but the gradual erosion of learning as institutions outsource cognitive work to models. Universities report growing, embedded uses of AI in administrative triage, teaching, and research, raising novel questions of governance and assessment.

At the same time, scholars and editors say peer review is buckling under scale: rising submission volumes, reviewer shortages, and faster publishing cycles are straining quality control. That strain is compounded by AI-generated manuscripts, automated literature scans, and compressed timelines for grant and journal review.

Trustees, provosts, and research offices must now weigh policies on acceptable AI use, invest in reviewer capacity, and redesign assessment to protect learning outcomes and research integrity. These twin pressures on classroom pedagogy and scholarly publishing will force institutions to recalibrate incentives, oversight, and skills training.