Through a Fair Looking-Glass: Mitigating Bias in Image Datasets


Abstract

Despite the recent growth in computer vision applications, the question of how fair and unbiased they are remains underexplored. There is abundant evidence that bias present in training data is reflected in the resulting models, or even amplified. Many previous methods for image dataset de-biasing, including models based on augmenting datasets, are computationally expensive to implement. In this study, we present a fast and effective model that de-biases an image dataset through reconstruction while minimizing the statistical dependence between intended variables. Our architecture combines a U-net that reconstructs images with a pre-trained classifier that penalizes the statistical dependence between the target attribute and the protected attribute. We evaluate our proposed model on the CelebA dataset, compare the results with two state-of-the-art de-biasing methods, and show that the model achieves a promising fairness-accuracy combination.
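The abstract describes an objective with two parts: a reconstruction term that keeps the de-biased images faithful to the originals, and a penalty on the statistical dependence between the classifier's target-attribute output and the protected attribute. A minimal pure-Python sketch of such a combined objective follows; the function names, the use of mean squared error, the absolute-covariance proxy for dependence, and the weight `lam` are all illustrative assumptions, not the authors' exact formulation:

```python
def mean(xs):
    return sum(xs) / len(xs)

def reconstruction_loss(original, reconstructed):
    # Pixel-wise mean squared error between original and reconstructed images
    # (here flattened into plain lists of floats for simplicity).
    return mean([(o - r) ** 2 for o, r in zip(original, reconstructed)])

def dependence_penalty(target_scores, protected):
    # Absolute covariance as a simple stand-in for the statistical dependence
    # between the classifier's target-attribute scores and the protected attribute.
    my, ms = mean(target_scores), mean(protected)
    cov = mean([(y - my) * (s - ms) for y, s in zip(target_scores, protected)])
    return abs(cov)

def debias_objective(original, reconstructed, target_scores, protected, lam=1.0):
    # Combined objective: reconstruct faithfully while pushing the target
    # prediction toward independence from the protected attribute.
    return (reconstruction_loss(original, reconstructed)
            + lam * dependence_penalty(target_scores, protected))
```

When target scores are uncorrelated with the protected attribute the penalty vanishes, so minimizing this objective trades reconstruction fidelity against independence, controlled by `lam`. The paper's actual model applies such a penalty through a U-net reconstructor and a pre-trained classifier rather than raw covariance on flat vectors.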

Citation (APA)

Rajabi, A., Yazdani-Jahromi, M., Garibay, O. O., & Sukthankar, G. (2023). Through a Fair Looking-Glass: Mitigating Bias in Image Datasets. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14050 LNAI, pp. 446–459). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-35891-3_27
