Anthropic acknowledged that it is testing a new, more capable AI model after an unsecured data cache inadvertently exposed internal details, including draft blog material and references to the model's existence. The leak also surfaced plans for an invite-only CEO summit. Cybersecurity researchers connected to the incident said the documents had been stored in a publicly searchable cache by default because of a CMS configuration error. After being notified, Anthropic removed public access to the documents. The episode highlights a new risk surface for universities and research institutions adopting frontier AI tools: operational security can fail even while model work is still "in progress," turning private development artifacts into public intelligence.