Multimodal Biometric System Based on Autoencoders and Learning Vector Quantization


Abstract

This paper proposes a bimodal biometric verification system based on face and voice traits. Face features are extracted with an autoencoder neural network, and voice features with Mel-frequency cepstral coefficients (MFCCs). Matching uses the Euclidean distance between a sample and the per-subject cluster centers obtained with a learning vector quantization (LVQ) network. Score-level fusion is performed by normalizing and summing the individual face-trait and voice-trait scores. Several experiments vary the number of cluster centers, the size of the encoder output, and the number of frames used to represent a subject's voice trait. Performance is evaluated with the area under the receiver operating characteristic curve (AUC). The following results are obtained: voice-trait biometric system, AUC = 0.877; face-trait biometric system, AUC = 0.94; bimodal biometric system, AUC = 0.98. The MOBIO database used was collected from 50 individuals (37 male and 13 female) using mobile phones.
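The matching and fusion steps described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names, the use of the minimum distance to a subject's cluster centers as the match score, and min-max normalization for the fusion step are assumptions for the sketch.

```python
import numpy as np

def match_score(probe, centers):
    """Minimum Euclidean distance from a probe feature vector to a
    subject's LVQ cluster centers (lower = better match). Hypothetical
    scoring rule assumed for illustration."""
    centers = np.asarray(centers, dtype=float)
    return float(np.min(np.linalg.norm(centers - probe, axis=1)))

def fuse(face_scores, voice_scores):
    """Score-level fusion: min-max normalize each modality's scores,
    then sum them, as the abstract's 'normalization and sum' suggests
    (min-max is an assumption)."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    return norm(face_scores) + norm(voice_scores)

# Example: probe at the origin, two cluster centers for one subject.
d = match_score(np.array([0.0, 0.0]), [[3.0, 4.0], [1.0, 0.0]])  # -> 1.0
fused = fuse([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])  # -> [0.0, 1.0, 2.0]
```

In a verification setting, the fused score for a claimed identity would then be compared against a decision threshold chosen from the ROC curve.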

Citation (APA)

Costa-Filho, C. F. F., Negreiro, J. V., & Costa, M. G. F. (2022). Multimodal Biometric System Based on Autoencoders and Learning Vector Quantization. In IFMBE Proceedings (Vol. 83, pp. 1611–1617). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-70601-2_236
