Professors are finding that physical presence in the classroom no longer deters AI‑enabled academic dishonesty. Instructors report students using agentic and chat‑based tools to generate answers for participation exercises, written work, and timed assessments, undermining safeguards that relied on in‑room supervision and honor codes. In a separate incident, an app that automated student work, promoted by its creators as a demonstration, prompted faculty and academic‑integrity officers to warn campuses about tools that can complete assignments end to end.

Universities are scrambling to update honor codes, redesign assessments around authentic, in‑person demonstrations of learning, and accelerate AI‑literacy training for faculty and students. Academic leaders note that rapidly evolving agentic tools demand both technical detection strategies and curriculum redesign, while lawyers and accreditation officers are watching how institutions balance academic freedom, privacy, and enforcement.