Audio-visual speaker recognition promises higher performance than any single-modal biometric system. This paper further develops a novel approach to bimodal speaker recognition based on Dynamic Bayesian Networks (DBNs). We investigate five different topologies for a feature-level fusion framework using DBNs, and we demonstrate that the performance of multimodal systems can be further improved by appropriately modeling the correlation between the speech features and the face features. Experiments conducted on a multimodal database of 54 users show promising results, with an absolute improvement of about 7.44% in the best case and 3.13% in the worst case over a single-modal speaker recognition system. © Springer-Verlag Berlin Heidelberg 2005.
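The feature-level fusion the abstract refers to combines the synchronized speech and face feature streams into a single observation vector before joint modeling. A minimal sketch of that step (the feature dimensions, frame count, and variable names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical synchronized per-frame features: 13 speech coefficients
# (e.g. MFCC-like) and 20 face appearance coefficients per frame.
# All dimensions here are chosen purely for illustration.
rng = np.random.default_rng(0)
speech_feats = rng.standard_normal((100, 13))  # 100 frames x 13 dims
face_feats = rng.standard_normal((100, 20))    # 100 frames x 20 dims

# Feature-level fusion: concatenate the two streams frame by frame so a
# joint model (such as a DBN) can capture cross-modal correlation,
# rather than scoring each modality separately and fusing decisions.
fused = np.concatenate([speech_feats, face_feats], axis=1)
print(fused.shape)  # (100, 33)
```

The different DBN topologies studied in the paper would then differ in how the dependencies between the speech and face components of this fused observation are factored, not in the concatenation itself.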
CITATION STYLE
Li, D., Yang, Y., & Wu, Z. (2006). Dynamic Bayesian networks for audio-visual speaker recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3832 LNCS, pp. 539–545). https://doi.org/10.1007/11608288_72