The Internet Watch Foundation (IWF) reported a 260-fold increase in AI-generated child sexual abuse material (CSAM) detected in 2025, rising from 13 videos to 3,443. Researchers warn this is only an early indicator of how generative AI is changing offender workflows and increasing harm to both new victims and long-term survivors.

The nonprofit Thorn attributes the surge to AI tools becoming faster, cheaper, and more accessible, enabling offenders to weaponize benign images and re-victimize individuals whose images have circulated for years. Thorn also highlights AI "personalization," in which offenders insert themselves into existing abuse imagery. IWF and its partners warn that investigators are overwhelmed by the scale and speed of new uploads, forcing enforcement and reporting ecosystems to operate at far higher throughput.

The development intensifies policy and compliance demands on platforms, reporting hotlines, and the safeguarding responsibilities tied to AI deployment. For universities conducting AI research or training students in data and ML pipelines, it raises immediate expectations around responsible development, monitoring, and incident-response training.