A new experimental study found that identical AI-generated resumes receive different evaluations when the only variation is the applicant’s name and implied gender. Researchers distributed matched applications to reviewers who were told the documents had been created using AI. Reviewers were 22% more likely to question the trustworthiness of a female-named candidate (Emily Clarke) than a male-named counterpart (James Clarke), and the female candidate’s CV was twice as likely to raise doubts about her competence and ability to do the job. Commentary included claims that the woman “can’t even write a CV herself,” while the man’s AI use was framed as reasonable assistance.

The findings were presented by Zehra Chatoo, founder of the think tank Code For Good Now, and discussed alongside related research on adoption gaps and concerns that women may face greater perceived penalties for using AI tools. The work points to a persistent “AI gender gap” in perceived integrity and risk.

For higher education and for employers recruiting graduates, the study highlights the need for bias monitoring in AI-enabled hiring processes and for clearer disclosure practices when applicants use AI in professional work.