Speaker tracking using multi-modal fusion framework

Abstract

This paper introduces a framework by which multi-modal sensory data can be efficiently and meaningfully combined for the application of speaker tracking. The framework fuses four different observation types drawn from multi-modal sensors. The advantage of this fusion is that weak sensory data from either modality can be reinforced and the influence of noise reduced. We propose combining these modalities with a particle filter, a method that achieves satisfactory real-time performance. © 2012 Springer-Verlag.
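To make the fusion idea concrete, below is a minimal sketch of how a particle filter can combine two observation modalities (e.g., an audio localisation cue and a visual face-detection cue) into one posterior over the speaker's position. This is not the authors' implementation; the state representation, the Gaussian likelihoods, the noise parameters, and the product-of-likelihoods fusion rule are all assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical sketch of multi-modal fusion in a particle filter.
# State: speaker (x, y) position in the image plane. All parameter
# values and the fusion rule are illustrative assumptions.

rng = np.random.default_rng(0)

N = 500                                          # number of particles
particles = rng.uniform(0, 640, size=(N, 2))     # initial (x, y) guesses, pixels
weights = np.full(N, 1.0 / N)

def gaussian_likelihood(particles, observation, sigma):
    """Likelihood of each particle given a 2-D point observation."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def step(particles, weights, audio_obs, visual_obs,
         motion_sigma=10.0, audio_sigma=40.0, visual_sigma=15.0):
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_sigma, particles.shape)

    # Update: fuse modalities by multiplying their likelihoods, so a weak
    # cue in one modality can be reinforced by the other.
    w = weights.copy()
    if audio_obs is not None:                    # audio may drop out (silence)
        w *= gaussian_likelihood(particles, audio_obs, audio_sigma)
    if visual_obs is not None:                   # vision may drop out (occlusion)
        w *= gaussian_likelihood(particles, visual_obs, visual_sigma)
    w += 1e-300                                  # guard against all-zero weights
    w /= w.sum()

    # Systematic resampling concentrates particles on likely positions.
    positions = (np.arange(N) + rng.random()) / N
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(N, 1.0 / N)

# One fused update: audio localises the speaker near (300, 240) while a
# face detector fires near (310, 235).
particles, weights = step(particles, weights,
                          audio_obs=np.array([300.0, 240.0]),
                          visual_obs=np.array([310.0, 235.0]))
estimate = particles.mean(axis=0)                # posterior mean = tracked position
print("estimated speaker position:", estimate)
```

Multiplying the per-modality likelihoods corresponds to assuming the observations are conditionally independent given the state; when one modality drops out, the surviving one still drives the update, which is the reinforcement behaviour the abstract describes.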

Citation (APA)

Saeed, A., Al-Hamadi, A., & Heuer, M. (2012). Speaker tracking using multi-modal fusion framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7340 LNCS, pp. 539–546). https://doi.org/10.1007/978-3-642-31254-0_61
