A study reviewing tens of thousands of admissions essays at a selective college found that low-income applicants were more likely to submit AI-generated essays, and that essay language grew more homogeneous after AI writing tools became widely available in 2022. The study documents a measurable shift in writing patterns rather than merely fueling a debate about authenticity. Its findings also raise equity questions for admissions evaluation: if AI usage differs by socioeconomic status, then essay content may reflect tool access and coaching as much as students’ own voices and experiences.

For admissions offices, the result is likely to intensify scrutiny of how reliable essays are as evaluation signals and to drive updates to review rubrics and verification practices. The development intersects with ongoing higher education policy attention to AI in academic and administrative settings, including how institutions detect AI-generated content and how they interpret writing quality. While the study examines a single admissions context, the direction is clear: admissions processes increasingly face standardization effects from the widespread availability of AI tools.