OpenAI reached a classified agreement to provide its models for Pentagon systems just hours after the Defense Department publicly designated rival Anthropic a “supply‑chain risk.” According to company statements, the contract defines permitted uses and commits to building technical safeguards into the models to limit their deployment for mass domestic surveillance and fully autonomous lethal weapons.

OpenAI CEO Sam Altman framed the deal as consistent with legal limits and said the company had written safeguards into the agreement; Anthropic said it would seek legal relief following the unprecedented government designation.

The administration’s action marks an inflection point in how federal procurement shapes which AI vendors universities and research centers can partner with. Campus research offices and sponsored‑programs leaders should expect renewed scrutiny of classified‑research arrangements, novel compliance demands, and potential shifts in funding flows. Institutions that hosted classified collaborations or subcontracted with Anthropic‑linked teams may face operational and reputational disruption as agencies reallocate work.