Senior Pentagon officials say they experienced a “whoa moment” when they realized how dependent the Defense Department had become on Anthropic’s AI for classified operations, underscoring supply-chain risk in military-grade AI services. Emil Michael, the Pentagon’s undersecretary for research and engineering, described rapid alarm inside defense leadership over single-vendor reliance.

The vendor tension spilled into the tech sector: Caitlin Kalinowski, OpenAI’s robotics and hardware lead, resigned, citing principled objections to surveillance and lethal autonomy after the company struck a classified agreement with the Defense Department. OpenAI defended its red lines (no domestic surveillance and no autonomous lethal weapons), but the exit highlights employee resistance and governance friction at companies that partner with government.

For universities, the episode raises the stakes for collaborations with defense contractors and AI providers: classified partnerships may accelerate research, but they also invite faculty and staff scrutiny, export-control complexity, and ethical review. Research offices should reassess dependency risks, negotiate fallback options, and tighten institutional review of dual-use projects.