A public rupture between the Pentagon and Anthropic escalated this week when the White House ordered federal agencies to cease using Anthropic’s models and the Defense Department threatened to designate the company a “supply chain risk.” Anthropic CEO Dario Amodei said the company “cannot in good conscience” accept contract language that would permit mass domestic surveillance or fully autonomous weapons, leaving the firm facing potential exclusion from classified government work.

In parallel, OpenAI reached an agreement to supply the Pentagon with AI models for classified use while preserving named technical and policy safeguards, according to company statements. OpenAI CEO Sam Altman told staff the deal would allow OpenAI to build the government-facing “safety stack” and retain control over model deployment in cloud rather than edge systems.

The standoff has direct implications for university research partnerships and defense-funded labs that rely on commercial LLMs. Universities with defense contracts will need rapid legal and compliance reviews, research offices said, and faculty working on national-security projects face disruption if Anthropic’s tools are cut from classified workflows.