An AI tool developed by University of Maryland alumni aims to teach business students how to write case‑study responses and to reduce cheating by providing structured prompts and practice scenarios. The tool, reported by Ashley Mowreader, focuses on building critical‑thinking skills while giving faculty alternative assessment strategies that blunt misuse of generative models. At the same time, educators warn that campuses must teach students to navigate technology's power dynamics, from deepfakes to opaque recommendation engines. Authors and instructors stress that digital literacy requires both technical fluency and ethical reflection, so that graduates can critically evaluate AI outputs and resist manipulation. Together, the two items illustrate a pragmatic campus response: faculty and students are building tools while simultaneously calling for curricular shifts toward algorithmic literacy and source evaluation. Universities will need to update assessment design, invest in faculty training, and coordinate with career services to help students apply AI responsibly in professional contexts.