OpenAI reached an agreement with the Pentagon to provide its models for classified use, a deal announced days after Anthropic was told to stop working with federal agencies. OpenAI said the contract preserves technical and legal guardrails — including commitments around human judgment in lethal use cases and protections against domestic mass surveillance — though the company and the department characterize those measures differently.

OpenAI CEO Sam Altman told employees the Pentagon agreed to allow the company to implement its own safety stack and to limit deployments to cloud environments rather than edge systems. OpenAI framed the contract as a model for how private AI firms can provide capabilities while preserving named "red lines."

The deal shifts the private-sector landscape: universities and research centers that partner with these firms now face a bifurcated supplier environment in which access to classified projects depends on fraught negotiations over ethics, liability and operational control.