A Deep Dive into Dataset Imbalance and Bias in Face Identification

Citations: 4 | Mendeley readers: 13

Abstract

As the deployment of automated face recognition (FR) systems proliferates, bias in these systems is not just an academic question, but a matter of public concern. Media portrayals often center imbalance as the main source of bias, i.e., that FR models perform worse on images of non-white people or women because these demographic groups are underrepresented in training data. Recent academic research paints a more nuanced picture of this relationship. However, previous studies of data imbalance in FR have focused exclusively on the face verification setting, while the face identification setting has been largely ignored, despite being deployed in sensitive applications such as law enforcement. This is an unfortunate omission, as 'imbalance' is a more complex matter in identification; imbalance may arise in not only the training data, but also the testing data, and furthermore may affect the proportion of identities belonging to each demographic group or the number of images belonging to each identity. In this work, we address this gap in the research by thoroughly exploring the effects of each kind of imbalance possible in face identification, and discuss other factors which may impact bias in this setting.
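To make the abstract's distinction concrete, the sketch below (not taken from the paper; the dataset, identity IDs, and group labels are hypothetical) separates the two axes of imbalance it describes: how many identities each demographic group contributes, and how many images each identity contributes. The same two measurements can be applied independently to a training set and to a test gallery.

```python
# Minimal sketch, assuming a toy dataset of (identity_id, demographic_group)
# labels per image. Not the paper's code; purely illustrative.
from collections import Counter

images = [
    ("id_001", "group_A"), ("id_001", "group_A"), ("id_001", "group_A"),
    ("id_002", "group_A"),
    ("id_003", "group_A"),
    ("id_004", "group_B"), ("id_004", "group_B"),
]

# Axis 1: identities per demographic group (identity-level imbalance).
identities_per_group = Counter()
for identity, group in set(images):  # deduplicate to one entry per identity
    identities_per_group[group] += 1

# Axis 2: images per identity (image-level imbalance).
images_per_identity = Counter(identity for identity, _ in images)

print(identities_per_group)  # Counter({'group_A': 3, 'group_B': 1})
print(images_per_identity)   # Counter({'id_001': 3, 'id_004': 2, 'id_002': 1, 'id_003': 1})
```

In this toy example the groups are imbalanced at the identity level (three identities in group_A versus one in group_B) while the identities themselves also differ in image counts, showing that the two kinds of skew can vary independently.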

Citation (APA)

Cherepanova, V., Reich, S., Dooley, S., Souri, H., Dickerson, J., Goldblum, M., & Goldstein, T. (2023). A Deep Dive into Dataset Imbalance and Bias in Face Identification. In AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 229–247). Association for Computing Machinery, Inc. https://doi.org/10.1145/3600211.3604691
