The Pentagon’s fallout with Anthropic and OpenAI’s rapid dealmaking with the Defense Department have forced a reckoning for universities that supply talent, research partnerships, and data. The Pentagon publicly terminated its Anthropic contract after disagreements over restrictions on surveillance and lethal uses, framing the move as a supply-chain security action; the designation of Anthropic as a supply-chain risk underscores how national security priorities now shape access to leading models and contract eligibility.

Anthropic has signaled legal and policy pushback, and university research offices that had begun partnerships with commercial AI firms now face abrupt procurement and compliance questions.

OpenAI, meanwhile, told staff it rushed its Pentagon deal and is renegotiating to add explicit prohibitions, most notably language barring intentional domestic surveillance and restricting work with certain intelligence components. The aim is to preserve commercial work with defense while addressing privacy and academic-research concerns.

As military clauses become decisive in partner selection, campus technology and research leaders will need to reassess vendor controls, export and classification risk, and IRB/ethics review processes.