A review of tens of thousands of admissions essays at a selective college found that low-income students were more likely to submit AI-generated writing, and that essay language became more homogeneous after AI tools became widely available in 2022. The reporting points to potential distributional effects in AI use: access to institutional knowledge and support may shape who relies on generative tools.

The finding matters for admissions integrity and student equity because essay-based evaluation can influence offers and pathways, particularly at selective institutions where narrative review is central to admissions decisions. If AI assistance is used unevenly, the admissions process may not be measuring applicants consistently.

The dataset described in the reporting is limited to one selective college's essay review, but it aligns with broader concerns about AI detection, authorship verification, and how admissions offices should respond to tool proliferation without disadvantaging applicants who have fewer resources. For higher education leaders, the issue intersects with student success and fairness: policy responses should be targeted, transparent, and designed not to further penalize applicants from lower-income backgrounds who already face barriers.