Anthropic’s CEO Dario Amodei said the company “cannot in good conscience accede” to the Defense Department’s demand that it remove contractual limits on how the military can use its Claude models. The statement came after the Pentagon gave Anthropic a firm deadline to accept broader rights for “any lawful purpose,” or risk losing a key government contract.

The dispute has escalated into a public standoff: Defense officials have threatened to designate Anthropic a “supply‑chain risk,” a move that could freeze the company out of many government and contractor relationships. Anthropic says the department’s contract language would permit uses — including mass surveillance and fully autonomous weapons — that the company built safeguards to prevent.

The clash pits national‑security officials who prioritize unfettered operational use against an industry cohort demanding technical and ethical guardrails. For universities and research centers that partner with AI firms or train students in frontier AI, the outcome could reshape procurement norms, academic‑industry collaborations, and the legal terms governing model access.