An open-source autonomous assistant, Moltbot, has gone viral, showcasing the productivity potential of agentic AI while exposing serious security vulnerabilities; cybersecurity firms warn that agentic bots with deep system access create novel attack surfaces. Moltbot's design, which combines persistent memory, access to root files and credential stores, and external communication channels, has prompted warnings about delayed-execution and prompt-injection attacks. At the same time, education researchers and policy groups are publishing cautionary analyses of generative AI in classrooms: a Brookings report flagged risks to independent thinking and to teacher–student trust, even as surveys show widespread use by both teachers and students. Schools and colleges now face decisions about governance, classroom policy, digital-credential integrity, and cybersecurity standards for AI tools.

Why it matters: Institutions must update cybersecurity controls and pedagogical policies at the same time, from identity and data protections for campus-wide AI agents to assessment integrity and faculty training, while preserving the productivity benefits AI can offer in research and instruction.