Google disclosed that it disrupted a criminal group attempting to exploit a previously unknown (“zero-day”) vulnerability in a popular online administration tool, and that the attackers used AI to support the operation. The incident adds to evidence that AI is moving from a theoretical risk to a working part of operational cybersecurity threat models. Google’s threat intelligence chief John Hultquist warned that the moment cybersecurity experts have anticipated has arrived: “AI-driven vulnerability and exploitation” is already unfolding. The company said it notified the affected organization and law enforcement and stopped the plan before any damage occurred.

The disclosure also arrives amid shifting government approaches to AI oversight and model vetting, with debate continuing over whether and how the federal government should regulate AI tools used in cyberattacks.

For colleges and universities, the immediate compliance and security implication is practical: more campus systems rely on AI-accelerated services and networked devices, which increases the need for zero-day readiness, continuous monitoring, and tighter identity controls around administrative tools and privileged access.