The Pentagon’s public split with Anthropic, and OpenAI’s hurried renegotiation with the Defense Department, crystallized an immediate governance crisis for AI suppliers and their university partners. The Pentagon labeled Anthropic a “supply‑chain risk” after the company refused terms the Defense Department said it needed; Anthropic has signaled legal pushback. OpenAI, meanwhile, moved to renegotiate its own agreement, adding explicit language aimed at limiting domestic surveillance uses.

The dispute has rippled into higher education: universities that partner with AI firms for research, procurement, or classified collaborations now face questions about vendor risk, export controls, and the enforceability of industry safeguards. Key actors include Anthropic (CEO Dario Amodei), OpenAI (CEO Sam Altman), the U.S. Department of Defense, and legal commentators who warn of gaps between contractual pledges and enforceable limits.

Campus research offices and contracting officers should expect increased scrutiny of AI supplier terms, and more institutions will likely demand clearer legal guarantees about how models may be used in defense, surveillance, or dual‑use research contexts.