New joint research from Anthropic, Oxford, and Stanford shows that advanced reasoning models can be manipulated through 'chain‑of‑thought hijacking,' enabling attackers to bypass safety guardrails. The study found that attack success rates climb as reasoning chains lengthen, affecting major commercial models from OpenAI, Anthropic, Google, and others. The findings raise new concerns for universities deploying large reasoning models for research, instruction, and administrative use. CIOs and AI governance teams must revise threat models, partner with vendors on mitigations, and tighten access controls for high‑capability systems.