Bias mitigation techniques in image classification: fair machine learning in human heritage collections

Abstract

A major problem with automated classification systems is that, if they are not engineered with fairness considerations, they can be detrimental to certain populations. Furthermore, while engineers have developed cutting-edge technologies for image classification, there is still a gap in applying these models to human heritage collections, whose data sets usually consist of low-quality photographs of people of diverse ethnicity, gender, and age. In this work, we evaluate three bias mitigation techniques using two state-of-the-art neural networks, Xception and EfficientNet, for gender classification. Moreover, we explore the use of transfer learning from a fair data set to overcome training data scarcity. We evaluated the effectiveness of the bias mitigation pipeline on a cultural heritage collection of photographs from the 19th and 20th centuries, and we used the FairFace data set for the transfer learning experiments. We found that transfer learning yields better performance when working with a small data set. Moreover, the fairest classifier was obtained by combining transfer learning with threshold change, re-weighting, and image augmentation as bias mitigation methods.
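Two of the mitigation techniques named above, re-weighting and threshold change, can be sketched in a few lines. The sketch below is illustrative only and is not the authors' implementation: it assumes the common Kamiran–Calders style of re-weighting (weighting each group–label cell by its expected versus observed frequency) and a per-group decision threshold chosen to roughly equalize the true-positive rate across groups; the function names and the `target_tpr` parameter are hypothetical.

```python
import numpy as np

def reweight(groups, labels):
    """Re-weighting (Kamiran-Calders style): weight each (group, label)
    cell by expected/observed frequency, so under-represented
    combinations count more during training."""
    groups = np.asarray(groups)
    labels = np.asarray(labels)
    w = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            observed = mask.mean()
            if observed > 0:
                # expected frequency if group and label were independent
                expected = (groups == g).mean() * (labels == y).mean()
                w[mask] = expected / observed
    return w

def group_thresholds(scores, labels, groups, target_tpr=0.9):
    """Threshold change: pick a per-group decision threshold so each
    group reaches (approximately) the same true-positive rate."""
    thresholds = {}
    for g in np.unique(groups):
        pos = np.sort(scores[(groups == g) & (labels == 1)])
        # index below which (1 - target_tpr) of the positives fall
        k = int(np.floor((1 - target_tpr) * len(pos)))
        thresholds[g] = pos[min(k, len(pos) - 1)]
    return thresholds
```

In a pipeline like the one described, the weights would be passed to the training loss (e.g. as per-sample weights), while the per-group thresholds would be applied to the classifier's output scores at prediction time.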

Citation (APA)

Pablo, D. O., Badri, S., Norén, E., & Nötzli, C. (2023). Bias mitigation techniques in image classification: fair machine learning in human heritage collections. Journal of WSCG, 31(1–2), 53–62. https://doi.org/10.24132/JWSCG.2023.6
