This study explores the use of generative Artificial Intelligence (AI) to create racially ambiguous avatars for job candidate interviews, with the aim of enhancing inclusive hiring practices and reducing implicit bias. Because diversity is recognized for boosting organizational performance, addressing hiring discrimination is crucial. This research investigates how avatars can anonymize candidates’ racial identities, potentially extending “blind” hiring to minimize subconscious biases. Through an iterative pilot process engaging 30 female participants with hiring experience and information technology backgrounds, avatars representing White, Asian, and race-neutral female candidates, along with corresponding candidate background descriptions, were refined, setting the foundation for a follow-up field experiment. That experiment will examine the avatars’ effects on managers’ hiring recommendations and decisions, manipulating strong versus ambiguous candidate qualifications, and will contribute insights into overcoming implicit bias. The initial pilot findings suggest promising directions for technology’s role in fostering equitable recruitment and provide groundwork for future studies on technology-enhanced inclusive hiring strategies.
Trifilo, A., & Blau, G. (2024). Developing Racially Ambiguous Job Candidate Avatars with Strong versus Ambiguous Job Candidate Backgrounds in Preparation for a Field Experiment for Enhancing Inclusive Hiring and Reducing Implicit Bias. Journal of Behavioral and Applied Management, 24(1). https://doi.org/10.21818/001c.115890