A new warning argues that some of the most dangerous AI-driven threats are not classic cyber intrusions but scalable fraud that bypasses safeguards by targeting people. The analysis discusses a meeting of U.S. financial executives and regulators around Anthropic’s Mythos model, noting concerns about systems that can identify and exploit vulnerabilities. The piece contends that while cyber risk remains serious, a parallel threat is accelerating: AI lowers the cost of producing hyper-personalized social engineering messages, including voice and video deepfakes, that can convince recipients to authorize transfers. In these cases the “system isn’t hacked”; the genuine customer is persuaded quickly enough to defeat monitoring tuned to human fraud patterns. The relevance to higher education is indirect but urgent: institutions with complex financial operations and large student data footprints face new compliance and training demands, along with a heightened need for identity verification and fraud response playbooks.
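Why authorized-but-fraudulent transfers slip past defenses can be illustrated with a small sketch. This is a hypothetical rules-based monitor, not any bank's real system: the rules, field names, and threshold are all invented for illustration. A monitor tuned to account-takeover signals (stolen credentials, unfamiliar devices) passes a transfer that the real customer, persuaded by a deepfake call, authorizes from their own device.

```python
# Hypothetical sketch: a rules-based fraud monitor tuned to classic
# account-takeover signals. All rules and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Transfer:
    amount: float
    device_known: bool       # initiated from the customer's usual device
    credentials_valid: bool  # real password/MFA, entered by the customer
    payee_is_new: bool       # first transfer to this recipient

def flag_for_review(t: Transfer, amount_limit: float = 10_000.0) -> bool:
    """Flag only when signals resemble an account takeover."""
    if not t.credentials_valid:
        return True                       # wrong credentials: likely intrusion
    if not t.device_known and t.payee_is_new:
        return True                       # new device AND new payee: suspicious
    if t.amount > amount_limit:
        return True                       # unusually large transfer
    return False

# A deepfake voice call persuades the customer to send $9,500 to a new
# payee from their own phone: every takeover signal is absent.
persuaded = Transfer(amount=9_500.0, device_known=True,
                     credentials_valid=True, payee_is_new=True)
print(flag_for_review(persuaded))  # False: nothing was "hacked"
```

The persuaded transfer clears every check because the signals the monitor watches for never fire; the compromise happened in the conversation, not the system.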