OpenAI reached a preliminary agreement with the Pentagon to provide its models for classified systems, while the government issued an unprecedented designation labeling Anthropic a “supply‑chain risk.” The moves came amid days of high‑stakes bargaining between the Defense Department and commercial AI vendors over usage limits and contractual safeguards.

OpenAI said its deal includes technical measures intended to block mass domestic surveillance and autonomous‑weapons deployments, while Anthropic says the government rejected its requests to lock those protections into contracts. Officials call the supply‑chain designation an extraordinary tool to pressure a vendor; legal experts warn it raises questions about precedent and the separation between policy and procurement.

The developments matter for higher education because they alter the landscape for research partnerships, classified‑environment collaborations, and institutional policies on off‑the‑shelf models. Universities that host government labs, train defense engineers, or license AI tools face new legal and reputational trade‑offs when choosing vendors.