College libraries are evolving into on‑campus AI sandboxes where students and faculty can safely test generative tools and learn responsible use, even as IT leaders warn about "shadow AI": unauthorized models and agents deployed outside institutional controls. Institutions such as Bryn Mawr College are piloting library‑led AI literacy programs, while IT offices push for governance frameworks, data‑protection policies, and vendor vetting to prevent data leaks and compliance failures. Campus leaders must balance hands‑on learning with controls for FERPA, copyright, and research integrity.