The Pentagon reached terms with OpenAI to use its models in classified systems while designating Anthropic a “supply‑chain risk,” an unprecedented federal action that threatens Anthropic’s government business. Legal and policy experts warned that the first‑of‑its‑kind designation raises questions about procurement leverage, vendor redlines, and the balance between national security and vendor governance. OpenAI said its contract includes technical commitments to prevent use in domestic mass surveillance and lethal autonomous weapons; CEO Sam Altman told staff the deal would let OpenAI build its own safety stack and restrict deployments to controlled cloud environments. Anthropic has signaled it will challenge the designation in court and has criticized the government’s demands as overbroad.

For universities, the development reshapes the landscape for AI research partnerships, classified‑research access, and funding. Campus AI labs that partner with defense contractors or host clearance‑restricted projects should reassess vendor dependencies, compliance requirements, and how government redlines could affect sponsored work and student researchers.