Surveys and academic research point to growing alarm among students and instructors that generative AI is undermining critical thinking and producing highly similar, model‑generated essays. A RAND Corp. survey found that roughly seven in ten students across grade levels worry that AI use erodes their reasoning skills, even as usage rates for chatbot tools rose substantially from mid to late 2025. A separate multi‑institution research project documented “inter‑model homogeneity”: different large language models often generate strikingly similar responses to open prompts, producing formulaic prose that can mask gaps in students’ reasoning. That convergence complicates detection and raises hard questions for assessment design. Colleges must respond with explicit instruction on tool use, scaffolded assignments that require process documentation, and faculty development in AI literacy. Academic‑integrity policies alone will not resolve the problem; assessment redesign and curricular change are already underway at many campuses.