An iterative decoding algorithm for fusion of multimodal information


Abstract

Human activity analysis in an intelligent space is typically based on multimodal informational cues. Using multiple modalities offers substantial advantages, but fusing information from different sources is a problem that must be addressed. In this paper, we propose an iterative algorithm for fusing information from multimodal sources, drawing inspiration from the theory of turbo codes: the information from the different sensors in a multimodal system plays a role analogous to the redundant parity bits of the constituent codes of a turbo code. A hidden Markov model is used to model the sequence of observations of each individual modality. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities, and this procedure is repeated until a convergence criterion is met. The resulting iterative algorithm is shown to achieve lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.
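To make the iteration concrete, below is a minimal sketch of the turbo-style fusion loop for two modalities that share a common state space. It assumes discrete-time HMMs with a shared transition matrix and precomputed per-frame observation likelihoods for each modality; the function names (forward_backward, iterative_fusion), the toy parameters, and the convergence test are illustrative assumptions, not the paper's exact formulation, and for simplicity the full posteriors are fed back as the exchanged information rather than a separated extrinsic term.

```python
import numpy as np

def forward_backward(trans, obs_lik, prior):
    """Scaled forward-backward pass; returns per-frame state
    posteriors gamma[t, s] = P(state_t = s | all observations)."""
    T, S = obs_lik.shape
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    scale = np.zeros(T)

    # Forward recursion with per-frame scaling for numerical stability.
    alpha[0] = prior * obs_lik[0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * obs_lik[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward recursion reusing the forward scale factors.
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (obs_lik[t + 1] * beta[t + 1]) / scale[t + 1]

    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def iterative_fusion(trans, prior, lik_a, lik_b, n_iter=10, tol=1e-6):
    """Turbo-style fusion of two modalities: each HMM is re-decoded
    with the other modality's state posteriors multiplied into its
    observation likelihoods, until the posteriors stop changing."""
    ext_a = np.ones_like(lik_a)  # evidence passed to modality A
    gamma_b_prev = None
    for _ in range(n_iter):
        gamma_a = forward_backward(trans, lik_a * ext_a, prior)
        gamma_b = forward_backward(trans, lik_b * gamma_a, prior)
        ext_a = gamma_b  # feed B's posteriors back to A
        if gamma_b_prev is not None and np.abs(gamma_b - gamma_b_prev).max() < tol:
            break
        gamma_b_prev = gamma_b
    return gamma_a, gamma_b

# Toy demo: a two-state HMM (e.g., speech / non-speech) with random
# audio and visual observation likelihoods over 50 frames.
rng = np.random.default_rng(0)
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
prior = np.array([0.5, 0.5])
lik_audio = rng.random((50, 2))
lik_video = rng.random((50, 2))
gamma_a, gamma_v = iterative_fusion(trans, prior, lik_audio, lik_video)
fused_states = gamma_v.argmax(axis=1)  # MAP state per frame after fusion
```

In the paper's setting, the per-frame likelihoods would come from the audio and visual front ends, and the iteration terminates once the exchanged state likelihoods converge.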

Citation (APA)

Shivappa, S. T., Rao, B. D., & Trivedi, M. M. (2008). An iterative decoding algorithm for fusion of multimodal information. EURASIP Journal on Advances in Signal Processing, 2008. https://doi.org/10.1155/2008/478396
