The White House has ordered federal agencies to stop using Anthropic’s AI after weeks of tense negotiations with the Defense Department. The directive came as Anthropic’s CEO, Dario Amodei, publicly rejected the Pentagon’s demand that the company relinquish contractual limits on military uses, underscoring a widening rift between national-security priorities and AI companies’ public ethics commitments.

President Trump’s order set a six-month phaseout of Anthropic’s tools across agencies; the Pentagon had threatened to label the company a “supply-chain risk” or to invoke extraordinary authorities if the firm did not comply. Anthropic said it could not “in good conscience” remove safeguards prohibiting mass domestic surveillance and fully autonomous weapons, arguing that those uses cross ethical red lines.

Universities and research partners that have built classified workflows around Anthropic’s models now face transition risks. The standoff raises broader questions about how academic collaborations, classified research, and campus partnerships with AI vendors will be governed going forward.