A new analysis of admissions essays at a selective college found that low-income applicants were more likely to submit AI-generated essays, and that use of AI tools corresponded with more homogeneous language across applications. The findings raise fresh compliance and equity questions for admissions offices charged with ensuring authenticity, transparency, and reliable review.

The study examines tens of thousands of essays, comparing writing patterns before and after AI tools became broadly available in 2022. Researchers reported a measurable shift toward language uniformity, consistent with templated or model-driven drafting.

For admissions leaders, the results are less about scoring individual essays than about how screening processes can inadvertently disadvantage students who rely on AI for writing support, especially where coaching resources are limited. Institutions increasingly face the operational challenge of distinguishing legitimate drafting assistance from AI output that flattens individuality, while ensuring that any policy response does not raise barriers for students with less access to editing help.