Ethics of AI: Do the Face Detection Models Act with Prejudice?

Abstract

This work presents a study of an ethical issue in Artificial Intelligence: the presence of racial bias in models that detect faces in images. Our analyses were performed on a real-world system designed to detect fraud in public transportation in Salvador (Brazil). Our experiments were conducted in three steps. First, we individually analyzed a sample of images and added labels for each user's gender and race. Next, we applied well-established detectors, based on different Convolutional Neural Network architectures, to find faces in the previously labeled images. Finally, we used statistical tests to assess whether there is a relation between the error rates and those labels. According to our results, we observed important biases: error rates were higher for images of black people, and errors were more likely for black men and black women alike. Based on these conclusions, we highlight the risk of deploying computational systems that may harm minority groups that have historically been neglected.
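
The abstract outlines the pipeline but not the concrete tooling. As a rough illustration only, the sketch below pairs a CNN-based face detector (facenet-pytorch's MTCNN, one possible stand-in for the detectors the authors evaluated) with a chi-square test of independence between detection outcomes and a demographic label. The detector choice, file handling, contingency counts, and the specific test are assumptions, not the paper's actual setup; a chi-square test is simply a natural fit when both the label and the detection outcome are categorical.

```python
# Illustrative sketch only: the paper does not publish its detectors or
# data. MTCNN and the chi-square test below are assumed stand-ins.
from facenet_pytorch import MTCNN   # one CNN-based detector (assumption)
from PIL import Image
from scipy.stats import chi2_contingency

mtcnn = MTCNN(keep_all=True)

def face_detected(path: str) -> bool:
    """Return True if the detector finds at least one face in the image."""
    boxes, _ = mtcnn.detect(Image.open(path).convert("RGB"))
    return boxes is not None

# Step 3 of the study: test whether detection errors relate to a label.
# Hypothetical contingency table: rows are race labels, columns are
# [faces detected, faces missed]. These counts are made up.
table = [
    [480, 20],   # images labeled "white"
    [430, 70],   # images labeled "black"
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Detection errors appear associated with the race label.")
else:
    print("No evidence of association at the 5% level.")
```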

Citation (APA)

Ferreira, M. V., Almeida, A., Canario, J. P., Souza, M., Nogueira, T., & Rios, R. (2021). Ethics of AI: Do the Face Detection Models Act with Prejudice? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13074 LNAI, pp. 89–103). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-91699-2_7
