Higher‑education institutions are sharpening AI strategies while confronting new security risks from autonomous agents. Purdue published a campuswide AI plan aimed at integrating generative tools into the curriculum and making graduates “AI competent,” seeking a coordinated approach to pedagogy and policy. The move signals a shift from reactive bans to institution‑level frameworks for instruction and assessment. At the same time, the viral open‑source agent Moltbot has raised alarms in cybersecurity circles: researchers warn that agentic assistants holding credentials, browser history and persistent memory pose novel data‑exfiltration and delayed‑execution risks. Campus IT and privacy officers now face a dual task: adopting AI for pedagogy while hardening endpoints and credential controls to prevent agent‑driven breaches.
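The delayed‑execution risk flagged by researchers can be made concrete with a toy sketch (all file names and task fields here are hypothetical, not taken from Moltbot): an agent that replays tasks from persistent memory will happily fire an instruction planted long before, with nothing tying the action back to its untrusted origin.

```python
import json
import time
from pathlib import Path

# Hypothetical persistent memory store for a toy agent.
MEMORY = Path("agent_memory.json")
MEMORY.unlink(missing_ok=True)  # start clean for the demo

def remember(task: dict) -> None:
    """Append a task to the agent's persistent memory."""
    tasks = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    tasks.append(task)
    MEMORY.write_text(json.dumps(tasks))

def run_due_tasks(now: float) -> list:
    """Return every task whose trigger time has passed.

    Nothing here checks WHO wrote the task or WHEN: an instruction
    injected earlier (say, via a poisoned page the agent browsed)
    surfaces later with the agent's full credentials and context.
    """
    tasks = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    return [t for t in tasks if t["run_at"] <= now]

# Injection today...
remember({"action": "upload_browser_history", "run_at": time.time() + 86400})
# ...lies dormant now, and fires tomorrow with no audit trail.
```

The sketch shows why endpoint and credential controls matter more for agents than for ordinary apps: the dangerous action is separated in time from the moment of compromise, so point-in-time scanning misses it.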