Cybersecurity leaders in Washington warned that advanced AI systems are outpacing enterprise security practices. A cross-sector group of AI security practitioners and standards-setters discussed how models can shift the balance of vulnerability discovery toward attackers, often faster than developers can respond. The concerns sharpened after Anthropic’s Mythos model surfaced in security-focused reporting, including reports that a small set of unauthorized users gained access quickly.

Conference participants said that when AI tools can discover vulnerabilities at scale, the margin for error shrinks, forcing organizations to formalize AI-specific threat measurement and incident response. Presenters also flagged the risk that security frameworks will lag behind. Several organizations are using overlapping standards and guidance, but attendees said there remains little agreement on what “secure AI” measurement looks like in practice, creating uncertainty for banks and other high-compliance sectors.

For higher education, the message is direct: campuses adopting AI tools for teaching, research, and administration are expanding their attack surface, while operational safeguards often remain oriented around traditional software risk.