A comparison of model validation techniques for audio-visual speech recognition


Abstract

This paper implements and compares a number of techniques proposed for improving the accuracy of Automatic Speech Recognition (ASR) systems. Because audio-only ASR can be degraded by environmental noise, some applications benefit from Audio-Visual Speech Recognition (AVSR), in which recognition uses both the audio signal and mouth movements obtained from a video recording of the speaker’s face region. In this paper, model validation techniques, namely the holdout method, leave-one-out cross-validation and bootstrap validation, are implemented both to validate the performance of an AVSR system and to compare the performance of the validation techniques themselves. A new speech data corpus is used, the Loughborough University Audio-Visual (LUNA-V) dataset, which contains 10 speakers each uttering five sets of samples. The database is divided into training and testing sets and processed in a manner suited to each validation technique under investigation. Performance is evaluated across a range of signal-to-noise ratio values using a variety of noise types obtained from the NOISEX-92 dataset.
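The three validation schemes named in the abstract differ only in how the available samples are split into training and testing sets. The sketch below illustrates each scheme on synthetic data with a simple nearest-neighbour classifier; the toy features, class means, and classifier are all assumptions standing in for the LUNA-V corpus and the AVSR recogniser, which are not reproduced here.

```python
import random

# Synthetic stand-in for audio-visual feature vectors: (feature, label) pairs.
# Two well-separated one-dimensional classes; purely illustrative.
random.seed(0)
data = [([random.gauss(mu, 1.0)], label)
        for label, mu in ((0, -2.0), (1, 2.0))
        for _ in range(25)]

def predict(train, x):
    """1-nearest-neighbour classifier, a stand-in for the AVSR recogniser."""
    return min(train, key=lambda t: abs(t[0][0] - x[0]))[1]

def accuracy(train, test):
    """Fraction of test samples classified correctly."""
    return sum(predict(train, x) == y for x, y in test) / len(test)

def holdout(data, test_fraction=0.3):
    """Holdout: one random split into a training set and a testing set."""
    shuffled = data[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return accuracy(shuffled[cut:], shuffled[:cut])

def leave_one_out(data):
    """LOOCV: each sample is tested once on a model trained on the rest."""
    hits = sum(accuracy(data[:i] + data[i + 1:], [data[i]])
               for i in range(len(data)))
    return hits / len(data)

def bootstrap(data, rounds=50):
    """Bootstrap: train on a resample with replacement, test on the rest."""
    n, scores = len(data), []
    for _ in range(rounds):
        idx = [random.randrange(n) for _ in range(n)]
        train = [data[i] for i in idx]
        test = [data[i] for i in range(n) if i not in set(idx)]
        if test:
            scores.append(accuracy(train, test))
    return sum(scores) / len(scores)

print("holdout      :", holdout(data))
print("leave-one-out:", leave_one_out(data))
print("bootstrap    :", bootstrap(data))
```

Note the trade-off the paper examines: holdout is cheapest but its estimate depends on a single split, leave-one-out trains one model per sample, and the bootstrap averages many resampled splits.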

Citation (APA)

Seong, T. W., Ibrahim, M. Z., Arshad, N. W. B., & Mulvaney, D. J. (2017). A comparison of model validation techniques for audio-visual speech recognition. In Lecture Notes in Electrical Engineering (Vol. 449, pp. 112–119). Springer Verlag. https://doi.org/10.1007/978-981-10-6451-7_14
