Noisy training for deep neural networks in speech recognition


Abstract

Deep neural networks (DNNs) have achieved remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however, may lead to serious over-fitting and hence severe performance degradation in adverse acoustic conditions, such as those with high ambient noise. We propose a noisy training approach to tackle this problem: by injecting moderate noise into the training data intentionally and randomly, more generalizable DNN models can be learned. This ‘noise injection’ technique, although already known to the neural computation community, has not been studied with DNNs, which involve a highly complex objective function. The experiments presented in this paper confirm that the noisy training approach works well for the DNN model and can provide substantial performance improvement for DNN-based speech recognition.
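The core idea of noisy training can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the authors' exact pipeline: it mixes a noise signal into each clean training utterance at a randomly sampled signal-to-noise ratio (SNR), which is the "moderate and random" injection the abstract describes. The function names (`inject_noise`, `noisy_training_batch`) and the SNR range are assumptions for illustration.

```python
import numpy as np

def inject_noise(clean, noise, snr_db):
    """Mix `noise` into `clean` at the requested SNR (in dB)."""
    # Tile or trim the noise so it covers the whole utterance.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]
    # Scale the noise so the mixture attains the requested SNR:
    # SNR_dB = 10 * log10(P_clean / P_noise_scaled).
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

def noisy_training_batch(batch, noise, snr_range=(5.0, 20.0), rng=None):
    """Corrupt each utterance in a batch at an independently drawn SNR."""
    rng = rng if rng is not None else np.random.default_rng()
    return [inject_noise(x, noise, rng.uniform(*snr_range)) for x in batch]
```

A training loop would apply `noisy_training_batch` to each mini-batch (or once per epoch) before feature extraction, so the DNN sees a different noise realization for the same utterance across epochs, which is what discourages over-fitting to clean conditions.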

Citation (APA)

Yin, S., Liu, C., Zhang, Z., Lin, Y., Wang, D., Tejedor, J., … Li, Y. (2015). Noisy training for deep neural networks in speech recognition. Eurasip Journal on Audio, Speech, and Music Processing, 2015(1), 1–14. https://doi.org/10.1186/s13636-014-0047-0
