Anthropic reported that its Claude Opus 4.6 model autonomously discovered more than 500 zero-day vulnerabilities in open-source libraries during internal red-team testing. The company framed the capability as a defensive asset but acknowledged that it is dual-use: the same techniques could be weaponized to find and exploit flaws faster than defenders can patch them. Anthropic said it is deploying internal probes and traffic monitoring to reduce misuse, and it pledged to collaborate with security researchers. IT leaders at colleges and universities, which are major users and producers of open-source code and research, now face urgent decisions about access controls, researcher policies, secure disclosure pathways, and the ethics of granting experimental AI models access to institutional codebases.