Defense officials concluded that overreliance on a single AI vendor had created an operational vulnerability, prompting rapid policy moves. Emil Michael, the Defense Department’s under secretary for research and engineering, described a “whoa moment” when leaders realized that Anthropic’s Claude had become indispensable in classified settings and that a loss of access could endanger operations. Within days, the Pentagon formally designated Anthropic a supply‑chain risk, directed contractors to limit their use of Claude, and ordered the department to phase out the model over six months.

The actions forced prime contractors and defense labs to scramble for alternate models and accelerated negotiations with other AI providers, including OpenAI, Google, and xAI.

For universities and research centers engaged in defense-sponsored work, the episode raises immediate procurement and compliance questions: which vendors are permitted in classified enclaves, how to validate models for controlled research, and how to maintain continuity of research and operational systems while contracts and security reviews proceed. The designation signals tighter federal scrutiny of the usage restrictions private AI vendors place on their models, along with potential downstream effects on campus collaborations with national security programs.