New AI tools that autonomously complete coursework inside learning management systems have surfaced on campuses, prompting urgent conversations about academic integrity and assessment design. A product dubbed "Einstein" claims it can log into Canvas, watch lectures, write essays, and submit assignments, demonstrating how lightweight agent frameworks can automate student tasks. Early users of autonomous AI agents describe fragile performance and frequent failures: agents have deleted inboxes, misapplied instructions, and required close oversight. Researchers and instructional designers warn that current detection methods are unlikely to keep pace with agent sophistication, and they advocate redesigning assessments to require human‑centered evidence and iterative, in‑person demonstrations of mastery. University IT and academic‑integrity offices are weighing policy responses that balance educational innovation against protections from misuse, while faculty leaders consider shifting toward authentic, process‑based assessment that is harder to game with autonomous agents.
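
To illustrate why instructional designers treat this kind of automation as plausible rather than speculative, the sketch below shows how a short browser-automation script could drive an LMS login and a text submission. It is a minimal illustration under stated assumptions, not a description of the "Einstein" product: the URL, credentials, paths, and CSS selectors are placeholders that do not correspond to Canvas's actual markup, and the Playwright calls are generic browser automation.

```python
# Minimal sketch of one automated "agent" step against a generic LMS.
# All URLs, credentials, and selectors are placeholders, not real Canvas
# endpoints or page structure.
from playwright.sync_api import sync_playwright

LMS_URL = "https://lms.example.edu"              # placeholder institution URL
USERNAME = "student@example.edu"                 # placeholder credentials
PASSWORD = "not-a-real-password"
ASSIGNMENT_PATH = "/courses/101/assignments/1"   # placeholder assignment path
DRAFT_TEXT = "Prepared essay text would be pasted here."

def submit_assignment() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # Log in using form fields identified by placeholder selectors.
        page.goto(f"{LMS_URL}/login")
        page.fill("#username", USERNAME)
        page.fill("#password", PASSWORD)
        page.click("button[type=submit]")

        # Open the assignment and paste prepared text into the editor.
        page.goto(f"{LMS_URL}{ASSIGNMENT_PATH}")
        page.fill("textarea.submission-body", DRAFT_TEXT)
        page.click("button.submit-assignment")

        # Wait for an assumed confirmation message before closing.
        page.wait_for_selector("text=Submitted", timeout=10_000)
        browser.close()

if __name__ == "__main__":
    submit_assignment()
```

The point of the sketch is that the scripting layer is commodity tooling; what an autonomous agent adds is deciding what to type and recovering when page flows change, which is where early users report the fragility described above.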