Bimodal Biometrics Using EEG-Voice Fusion at Score Level Based on Hidden Markov Models

Abstract

This paper presents an experiment on bimodal biometrics based on score-level fusion of electroencephalographic (EEG) and voice signals. The experiments described were carried out on an open database of spoken commands in Spanish, available to the academic community for research purposes. The accuracy of the user identification system is evaluated using Hidden Markov Models (HMM) as classifiers for both the EEG and voice modalities. Feature extraction is implemented with Mel-Frequency Cepstral Coefficients (MFCC) for voice and wavelet analysis for EEG. From the scores generated by each independent system, fusion at score level is proposed for several weighted-sum cases based on weighted arithmetic means, and for the score-product case using a geometric-mean scheme. Performance evaluation indicates an average recognition rate of 90%. The results confirm the expected tendency, validating the convenience and usefulness of multimodal user identification systems and setting the ground for future studies involving data fusion at different levels and with other classifiers.
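As an illustration of the score-level fusion schemes mentioned in the abstract, the following is a minimal Python sketch, assuming the per-user match scores produced by the EEG and voice HMM classifiers have already been normalized to a common range. The function name, weights, and example scores are hypothetical and are not taken from the paper.

import numpy as np

def fuse_scores(eeg_scores, voice_scores, w_eeg=0.5, w_voice=0.5, method="weighted_sum"):
    # eeg_scores, voice_scores: 1-D arrays of normalized scores, one entry per enrolled user.
    # "weighted_sum" combines the modalities with a weighted arithmetic mean;
    # "product" combines them with a geometric mean of the two scores.
    eeg_scores = np.asarray(eeg_scores, dtype=float)
    voice_scores = np.asarray(voice_scores, dtype=float)
    if method == "weighted_sum":
        fused = w_eeg * eeg_scores + w_voice * voice_scores
    elif method == "product":
        fused = np.sqrt(eeg_scores * voice_scores)
    else:
        raise ValueError("unknown fusion method")
    # Identification decision: the enrolled user with the highest fused score.
    return int(np.argmax(fused))

# Example with three enrolled users and hypothetical normalized scores in [0, 1]:
predicted_user = fuse_scores([0.2, 0.7, 0.4], [0.3, 0.6, 0.9], w_eeg=0.6, w_voice=0.4)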

Citation (APA)

Moreno-Rodriguez, J. C., Ramirez-Cortes, J. M., Arechiga-Martinez, R., Gomez-Gil, P., & Atenco-Vazquez, J. C. (2020). Bimodal Biometrics Using EEG-Voice Fusion at Score Level Based on Hidden Markov Models. In Studies in Computational Intelligence (Vol. 862, pp. 645–657). Springer. https://doi.org/10.1007/978-3-030-35445-9_44
