Senior technologists have left major AI teams, and companies are seeing internal friction, as defense contracts and national security use cases collide with corporate safety pledges. Caitlin Kalinowski, who led hardware and robotics engineering at OpenAI, resigned, saying the company and the sector needed more deliberation on surveillance and lethal autonomy. Her departure followed the Pentagon’s fragile negotiations with Anthropic and the White House’s move to bar Anthropic from certain federal systems. Anthropic’s CEO also issued an apology after a leaked memo criticizing OpenAI staff surfaced, even as the company confirmed it had received the Defense Department’s supply‑chain designation. The episode has heightened employee unrest at AI firms and left university researchers and technology transfer offices weighing ethical guardrails, sponsor requirements, and risks to academic collaboration. Campus labs that partner with defense agencies or local governments now face new pressure to codify vendor risk assessments and to clarify permissible AI usage in sponsored research.