A new AI procurement fight between the Pentagon and Anthropic underscores how AI policy adjacent to higher education will intersect with research, ethics, and First Amendment arguments. The dispute cited in the commentary centers on Anthropic’s refusal to allow the Defense Department to use Claude for domestic mass surveillance and lethal autonomous warfare; the government subsequently canceled a $200 million contract. The Pentagon then designated Anthropic a “supply-chain risk,” with Secretary of Defense Pete Hegseth citing the company’s “woke” approach as a national security concern. Anthropic sued, arguing that the designation is based on its protected views about AI safety and therefore violates the First Amendment. Universities involved in AI partnerships, research commercialization, and responsible AI programs are watching closely as AI vendors navigate legal exposure tied to government procurement decisions.