K–12 and higher education IT leaders report that generative AI has significantly upgraded phishing, producing highly convincing attacks that mimic staff and administrators. Technology directors describe attackers using AI to tailor messages and to generate deepfake audio and video, raising the stakes for credential theft and ransomware incidents across campuses.

Some districts are experimenting with generative AI for security tasks, but results are mixed: enterprise tools can help automate detection, yet many lack the staff and funding to deploy advanced systems. Educators and technology officers emphasize training, endpoint security, and treating all prompts as potential public records.

Cybersecurity teams are advising institutions to increase penetration testing, lock down data inputs to public models, and invest in layered defenses as attackers leverage the same AI productivity tools being adopted on campuses.
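The advice to lock down data inputs to public models can be sketched as a pre-submission redaction filter that scrubs obvious personal data from prompts before they leave the district's network. The patterns and placeholder labels below are illustrative assumptions for demonstration, not a vetted redaction policy or a specific vendor's tool.

```python
import re

# Illustrative PII patterns (assumptions, not exhaustive): email addresses,
# US Social Security numbers, and US-style phone numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with bracketed placeholders before the prompt
    is sent to a public generative AI model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    print(redact("Reset the account for jdoe@district.k12.us, SSN 123-45-6789."))
```

A filter like this also pairs naturally with the public-records guidance above: logging each redacted prompt creates an auditable record without retaining the sensitive values themselves.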