The federal government’s split with Anthropic and a nascent deal with OpenAI mark a rapid recalibration in how the Pentagon sources frontier AI. Officials designated Anthropic a “supply‑chain risk,” and President Trump ordered federal agencies to stop using the company’s technology, while OpenAI announced a classified‑use agreement that the Pentagon appears willing to accept.

The dispute centers on the limits companies seek to impose on military applications. Anthropic resisted allowing unrestricted use for mass domestic surveillance and autonomous weapons and was subsequently sidelined; OpenAI’s agreement reportedly embeds similar limitations in arrangements the company says are compatible with Pentagon needs. Anthropic has threatened legal action, and the episode has drawn senior defense officials, including Emil Michael, into public conflict with AI firms.

Universities with defense partnerships, classified research programs, or joint AI centers face immediate implications: vendor shifts could disrupt contracts, classified‑environment toolchains, and ongoing graduate and faculty collaborations. Research offices and tech transfer offices should prepare contingency plans and review defense‑contract clauses tied to specific commercial models.