Colleges are confronting a two-sided AI problem: students largely learn generative tools on their own, while emerging agentic systems create fresh security and privacy risks. An Honorlock survey of about 1,000 students found that only 31% knew of AI courses at their institutions and fewer than 20% had taken one, even as a majority used AI for low-level coursework tasks. The pattern suggests universities are ceding early AI literacy to informal learning rather than to structured curricula. At the same time, open-source autonomous assistants like Moltbot, which can access files, credentials and web services and communicate externally, have gone viral, prompting cybersecurity warnings about persistent memory, prompt injection and credential exposure. For higher-education IT, research offices and faculty governance bodies, this means rapidly scaling two priorities: curriculum that trains ethical, workplace-ready AI use, and tightened security controls, vendor vetting and data-governance rules to manage agentic tools.