Analyzing thousands of essays, Cornell University researchers found that AI-written admission essays are notably generic and readily identifiable compared to human-authored ones. Attempts to customize the essays with specific writer traits often backfired, producing even more robotic narratives. An AI-based detection tool trained by the team achieved near-perfect accuracy in distinguishing AI-generated essays from human ones. The findings highlight the challenges institutions face in maintaining admissions integrity as AI writing tools spread, raising questions about how applicants are assessed and how authenticity is verified.