Colleges are piloting AI agents to automate administrative tasks and student services, while faculty groups publish classroom guides on integrating generative AI. Recent pieces outline how autonomous AI agents could handle procedural student work such as schedule changes and prior-learning assessments, and urge controls to protect privacy and academic integrity.

Researchers and edtech developers are also training models to spot errors in student math work and to surface misconceptions in real time. These efforts, supported by partnerships and competitions, aim to give instructors actionable data without requiring deep ML expertise.

Universities face implementation choices: dataset governance, vendor selection, and faculty training are core priorities. Campus leaders told trustees that scaling AI pilots will demand clear policies on data privacy and assessment integrity, and on how agentic systems augment, rather than replace, human advising and pedagogy.
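To make the misconception-detection idea concrete, here is a minimal toy sketch. The source describes trained models; this substitutes a simple rule-based diagnostic for illustration only. The misconception catalog, function names, and the `(a + b)**2` expansion task are all hypothetical, not drawn from any system mentioned above: the student's answer rule is compared against the correct rule and against known wrong rules at random sample points.

```python
import random

# Hypothetical catalog of common misconceptions for expanding (a + b)**2.
# Each label maps to the function an incorrect student rule would compute.
MISCONCEPTIONS = {
    "dropped cross term: (a+b)^2 -> a^2 + b^2": lambda a, b: a**2 + b**2,
    "doubled instead of squared: (a+b)^2 -> 2(a+b)": lambda a, b: 2 * (a + b),
}

def correct(a, b):
    return (a + b) ** 2

def diagnose(student_fn, trials=20, seed=0):
    """Compare a student's answer rule to the correct rule at random points;
    if it is wrong, return the first cataloged misconception that matches
    the student's outputs everywhere, else flag an unrecognized error."""
    rng = random.Random(seed)
    points = [(rng.randint(-9, 9), rng.randint(-9, 9)) for _ in range(trials)]
    if all(student_fn(a, b) == correct(a, b) for a, b in points):
        return "correct"
    for label, wrong_fn in MISCONCEPTIONS.items():
        if all(student_fn(a, b) == wrong_fn(a, b) for a, b in points):
            return label
    return "unrecognized error"

# A student who forgot the 2ab cross term is matched to that misconception.
print(diagnose(lambda a, b: a**2 + b**2))
# A correct expansion is recognized as correct.
print(diagnose(lambda a, b: a**2 + 2 * a * b + b**2))
```

Real classroom tools would infer the student's rule from written work rather than receive it as a function, but the same match-against-known-error-patterns logic is what turns a wrong answer into actionable feedback for an instructor.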