Efficient Audio-Visual Speaker Recognition via Deep Heterogeneous Feature Fusion


Abstract

Audio-visual speaker recognition (AVSR) has long been an active research area, primarily because the complementary information across modalities enables reliable access control in biometric systems; it remains a challenging problem mainly attributable to its multimodal nature. In this paper, we present an efficient audio-visual speaker recognition approach via deep heterogeneous feature fusion. First, we exploit a dual-branch deep convolutional neural network (CNN) learning framework to extract and fuse the high-level semantic features of face and audio data. Further, considering the temporal dependency of audio-visual data, we feed the fused features into a bidirectional Long Short-Term Memory (LSTM) network to produce the recognition result, through which speakers acquired under different challenging conditions can be well identified. The experimental results demonstrate the efficiency of our proposed approach in both audio-visual feature fusion and speaker recognition.
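The abstract describes a dual-branch CNN whose per-modality features are fused and then passed through a bidirectional LSTM for classification. The sketch below illustrates that general pipeline in PyTorch; it is not the authors' implementation. The branch depths, feature dimensions, fusion by concatenation, mean pooling over time, and the name AVSRFusionNet are all illustrative assumptions, since the abstract does not specify these details.

```python
import torch
import torch.nn as nn

class AVSRFusionNet(nn.Module):
    """Hypothetical dual-branch CNN + BiLSTM pipeline in the spirit of the
    paper's description; layer sizes and the concatenation fusion are
    assumptions, not the published architecture."""

    def __init__(self, num_speakers, feat_dim=256, hidden_dim=128):
        super().__init__()
        # Visual branch: small CNN over per-frame face crops (3 x H x W).
        self.face_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Audio branch: CNN over per-frame log-mel spectrogram chunks (1 x F x S).
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal model over the fused per-frame features.
        self.bilstm = nn.LSTM(2 * feat_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_speakers)

    def forward(self, faces, audio):
        # faces: (B, T, 3, H, W); audio: (B, T, 1, F, S), aligned per frame.
        B, T = faces.shape[:2]
        f = self.face_cnn(faces.flatten(0, 1)).view(B, T, -1)
        a = self.audio_cnn(audio.flatten(0, 1)).view(B, T, -1)
        fused = torch.cat([f, a], dim=-1)        # heterogeneous feature fusion
        out, _ = self.bilstm(fused)              # (B, T, 2 * hidden_dim)
        return self.classifier(out.mean(dim=1))  # pool over time, then classify
```

Concatenating the two feature vectors before the recurrent layer is one common fusion choice; the BiLSTM then models the temporal dependency of the audio-visual sequence that the abstract emphasizes.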

Citation (APA)

Liu, Y. H., Liu, X., Fan, W., Zhong, B., & Du, J. X. (2017). Efficient Audio-Visual Speaker Recognition via Deep Heterogeneous Feature Fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10568 LNCS, pp. 575–583). Springer Verlag. https://doi.org/10.1007/978-3-319-69923-3_62
