Image representations learned with unsupervised pre-training contain human-like biases

Abstract

Recent advances in machine learning leverage massive datasets of unlabeled images from the web to learn general-purpose image representations for tasks from image classification to face recognition. But do unsupervised computer vision models automatically learn implicit patterns and embed social biases that could have harmful downstream effects? We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images. We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset curated from internet images, automatically learn racial, gender, and intersectional biases. We replicate 8 previously documented human biases from social psychology, from the innocuous, as with insects and flowers, to the potentially harmful, as with race and gender. Our results closely match three hypotheses about intersectional bias from social psychology. For the first time in unsupervised computer vision, we also quantify implicit human biases about weight, disabilities, and several ethnicities. When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web.
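The bias-quantification method referenced above is an embedding association test in the spirit of WEAT (Caliskan et al., 2017), applied to image rather than word embeddings. The sketch below illustrates the general effect-size computation on embedding vectors; the attribute/target sets, the 512-dimensional random stand-ins, and the variable names are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of an embedding association test (WEAT-style effect size),
# assuming each image has already been mapped to a fixed-length embedding vector.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # Differential association of one embedding w with attribute sets A and B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Effect size d of the association between target sets X, Y (e.g. embeddings
    # of "flower" vs. "insect" images) and attribute sets A, B (e.g. "pleasant"
    # vs. "unpleasant" images). Larger |d| indicates stronger differential bias.
    assoc_X = [association(x, A, B) for x in X]
    assoc_Y = [association(y, A, B) for y in Y]
    pooled_std = np.std(assoc_X + assoc_Y, ddof=1)
    return (np.mean(assoc_X) - np.mean(assoc_Y)) / pooled_std

# Usage with random stand-in embeddings (replace with real model outputs):
rng = np.random.default_rng(0)
X, Y, A, B = (list(rng.normal(size=(10, 512))) for _ in range(4))
print(effect_size(X, Y, A, B))
```

A permutation test over reassignments of the target sets would typically accompany this effect size to assess significance; that step is omitted here for brevity.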

Citation (APA)

Steed, R., & Caliskan, A. (2021). Image representations learned with unsupervised pre-training contain human-like biases. In FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 701–713). Association for Computing Machinery, Inc. https://doi.org/10.1145/3442188.3445932
