AI training-data and tooling vendors are facing escalating security threats, with downstream implications for higher education AI readiness. Mercor, a $10 billion AI startup that recruits experts to provide training data used by major AI companies, confirmed a major data breach. The company says it was affected by a supply-chain attack linked to LiteLLM, an open-source library widely used to connect applications to AI services, and that a third-party forensics investigation is under way. Unconfirmed reports circulating online suggested the exposure could include sensitive project details for customers, which include Anthropic, OpenAI and Meta.

The breach also points to broader operational vulnerabilities for campuses: institutions increasingly depend on third-party AI tooling for research workflows, teaching support, and administrative automation, and a supply-chain compromise can quickly propagate to institutional data systems if vendor controls are insufficient.

For university CIOs and information security teams, the immediate action is tightening vendor risk management—reviewing API access, audit logs, and data-handling terms for AI-integrated platforms—while monitoring for credential compromise and unauthorized data flows.
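One concrete first step in that review is simply inventorying which AI-integrated dependencies are installed in institutional environments. The sketch below is illustrative only: it lists installed Python packages that match an internal watchlist, using the standard library's `importlib.metadata`. The watchlist names are assumptions for the example, not confirmed indicators of compromise, and a real program would also check versions against vendor advisories.

```python
# Minimal sketch: surface installed Python packages that appear on an
# internal watchlist of AI-tooling dependencies flagged for review.
# The watchlist contents are illustrative assumptions, not indicators
# of compromise.
from importlib import metadata

WATCHLIST = {"litellm", "openai", "anthropic"}  # hypothetical review list

def packages_under_review(watchlist=WATCHLIST):
    """Return {name: version} for installed packages on the watchlist."""
    found = {}
    for dist in metadata.distributions():
        # Package metadata behaves like an email Message; "Name" may be absent.
        name = (dist.metadata["Name"] or "").lower()
        if name in watchlist:
            found[name] = dist.version
    return found

if __name__ == "__main__":
    for name, version in sorted(packages_under_review().items()):
        print(f"review: {name}=={version}")
```

Running this per environment gives security teams a quick map of where a flagged library is actually deployed, which is the precondition for checking audit logs and rotating any credentials those installations hold.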