Anthropic is limiting access to its latest AI model, Claude Mythos, citing cybersecurity risks, and is rolling out a restricted initiative called Project Glasswing through defensive cybersecurity partners. The company says the model’s vulnerability-discovery capabilities are too dangerous for broad release. Cybersecurity experts and policymakers are assessing whether the phased rollout gives defenders a safer “head start” or merely delays inevitable risk as AI capabilities spread. The broader concern is that AI-enabled cyberattacks can scale quickly: even existing public models could help automate parts of vulnerability discovery and exploitation.

For higher education, the immediate relevance is practical. Universities run complex IT environments and increasingly depend on third-party digital services, making AI-driven security threats a campus risk-management issue for CIOs, research computing teams, and compliance leaders.