ServiceNow CEO Bill McDermott warned executives that AI agents are moving from “recommendations” to actions with the potential to delete production databases in seconds when governance fails. Speaking at Knowledge 2026, McDermott said “governance isn’t a feature. It’s the whole ball game,” after describing an AI agent incident that wiped customer records and backups without a traditional breach. The comments arrive as universities and ed-tech vendors expand agentic AI deployments in student services, research workflows, and IT automation. ServiceNow framed the security gap as a structural problem: enterprises have conflated probabilistic model output with deterministic execution workflows, complicating audit trails and compliance. For higher education, the takeaway is operational. Institutions adopting AI for scheduling, advising, learning analytics, and access management will need clearer identity controls, permissioning boundaries, and measurable governance—especially as vendors promote autonomous capabilities. The warning also underscores a growing accountability theme for IT and compliance leaders: AI security is no longer limited to model safety or data protection alone, but now hinges on how systems execute and who has the authority to authorize actions at speed.