The Internet Watch Foundation reported a 260-fold increase in AI-generated child sexual abuse material (CSAM) over one year, warning that the surge is “the tip of the iceberg.” The organization said the number of AI-generated CSAM videos detected or reported rose from 13 in the prior year to 3,443 in 2025. Researchers and advocates attributed the escalation to generative AI tools that are faster, cheaper, and easier for bad actors to access.

Thorn, a nonprofit that builds technology to combat online child exploitation, identified patterns including the re-victimization of survivors of historical abuse and the weaponization of innocent images.

For higher education, the report raises immediate risk-management and compliance questions: campuses increasingly face AI-enabled threats across digital learning platforms, investigation workflows, and content moderation systems. The key operational challenge is that investigator and platform reporting systems can be overwhelmed as automated generation and personalization reduce friction for offenders.