The Internet Watch Foundation reported a 260-fold increase in AI-generated child sexual abuse material in 2025, with confirmed videos rising from 13 the prior year to 3,443. Experts warned the figure is only “the tip of the iceberg,” as generative AI makes such material faster and easier for bad actors to create. The reporting describes how AI tools can re-victimize survivors of historical abuse by inserting offenders into existing abuse imagery and by converting benign images into harmful content, compounding harm to survivors and adding to investigators' caseloads. While the story is not campus-specific, higher education institutions adopting AI tools increasingly need stronger monitoring, incident response workflows, and education for researchers and students on safe, responsible use, especially for platforms that can ingest or generate media.