Measuring the gender and ethnicity bias in deep models for face recognition

Abstract

We explore the role of gender and ethnicity attributes in the decision making of face recognition technologies. Our work is partly motivated by the new European regulation on personal data protection (GDPR), which requires data controllers to avoid discriminatory hazards when managing sensitive data such as biometric data. The experiments in this paper study to what extent sensitive attributes such as gender or ethnic origin are encoded in the most common face recognition networks. To this end, our experiments include two popular pre-trained networks: VGGFace and ResNet-50. Both pre-trained models can classify gender and ethnicity easily (over 95% accuracy), even when 80% of the neurons in their embedding layers are suppressed. The experiments are conducted on the publicly available Labeled Faces in the Wild database, which contains more than 13,000 face images covering a wide range of poses, ages, races and nationalities.
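
The neuron-suppression experiment lends itself to a short illustration. Below is a minimal sketch, not the authors' code, of the general idea: train a simple linear probe on face embeddings from a pre-trained network, then repeat after zeroing out 80% of the embedding dimensions. The input file names and the logistic-regression probe are illustrative assumptions; the paper's exact protocol may differ.

```python
# Sketch of a neuron-suppression probe on pre-extracted face embeddings.
# Assumes embeddings were already computed with a pre-trained network such as
# VGGFace or ResNet-50; "embeddings.npy" and "genders.npy" are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X = np.load("embeddings.npy")  # shape: (n_images, embedding_dim)
y = np.load("genders.npy")     # shape: (n_images,), binary attribute labels

def probe_accuracy(X, y):
    """Train a linear probe on a split of the data and return test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Baseline: probe on the full embedding.
print("full embedding:", probe_accuracy(X, y))

# Suppression: keep a random 20% of dimensions, zero out the remaining 80%.
keep = rng.permutation(X.shape[1])[: int(0.2 * X.shape[1])]
mask = np.zeros(X.shape[1], dtype=bool)
mask[keep] = True
print("20% of neurons:", probe_accuracy(X * mask, y))
```

If sensitive attributes are distributed across the embedding, as the paper reports, the second accuracy stays close to the first despite the heavy masking.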

Cite

CITATION STYLE

APA

Acien, A., Morales, A., Vera-Rodriguez, R., Bartolome, I., & Fierrez, J. (2019). Measuring the gender and ethnicity bias in deep models for face recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11401 LNCS, pp. 584–593). Springer. https://doi.org/10.1007/978-3-030-13469-3_68
