Noise adaptive stream weighting in audio-visual speech recognition


Abstract

It has been shown that integrating acoustic and visual information, especially in noisy conditions, yields improved speech recognition results. This raises the question of how to weight the two modalities under different noise conditions. In this paper, we develop a weighting process that adapts to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments; in all cases, the neural networks were trained on clean data. First, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria for estimating the reliability of the audio stream. Based on this, a mapping between these measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.
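The stream weighting described in the abstract is commonly realized as a log-linear combination of the per-class scores of the two streams, controlled by a single weight λ. The sketch below illustrates this standard fusion rule; the function and variable names are illustrative, not taken from the paper, and the toy posteriors are invented for demonstration.

```python
import numpy as np

def fuse_streams(log_lik_audio, log_lik_video, lam):
    """Log-linear stream fusion: combine per-class log-likelihoods
    with exponential stream weight lam in [0, 1].
    lam = 1.0 -> audio only; lam = 0.0 -> video only."""
    return lam * log_lik_audio + (1.0 - lam) * log_lik_video

# Toy example with 3 classes (hypothetical posteriors):
la = np.log(np.array([0.7, 0.2, 0.1]))  # audio-stream scores
lv = np.log(np.array([0.3, 0.5, 0.2]))  # video-stream scores

# In clean conditions a reliability estimate would push lam toward 1;
# under heavy acoustic noise it would shift weight to the video stream.
fused = fuse_streams(la, lv, lam=0.5)
decision = int(np.argmax(fused))
```

An adaptive scheme like the one the paper develops would set `lam` from an audio-reliability measure (e.g. an SNR estimate) via a learned mapping, rather than fixing it by hand.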

Citation (APA)

Heckmann, M., Berthommier, F., & Kroschel, K. (2002). Noise adaptive stream weighting in audio-visual speech recognition. EURASIP Journal on Applied Signal Processing, 2002(11), 1260–1273. https://doi.org/10.1155/S1110865702206150
