New findings suggest AI-assisted job applications may be judged differently depending on the candidate's gender, raising concerns about a "perceived integrity" gap. In a study by Code For Good Now, the think tank founded by Zehra Chatoo, researchers used AI to generate identical résumés for two fictitious candidates, Emily Clarke and James Clarke, then measured reviewer responses after disclosing that the documents were AI-assisted.

Reviewers were more likely to question the female candidate's trustworthiness: Emily's résumé drew a 22% higher probability of trust concerns, and reviewer feedback also cast doubt on her competence and ability to do the job. The study's interpretation centers on how evaluators weigh "effort" against "integrity" when AI is involved, with gender shaping those judgments.

The results align with adoption barriers reported in earlier work, including research showing that college-aged users are embracing AI while professional users, especially women, may be more risk-averse about the reputational costs of relying on it.