A comparison of RNN LM and FLM for Russian speech recognition

Abstract

In this paper, we describe research on a recurrent neural network (RNN) language model (LM) for N-best list rescoring in automatic continuous Russian speech recognition and compare it with a factored language model (FLM). We tried RNNs with different numbers of units in the hidden layer. For FLM creation, we used five linguistic factors: word, lemma, stem, part-of-speech, and morphological tag. All models were trained on a text corpus of 350M words. We also linearly interpolated the RNN LM and the FLM with the baseline 3-gram LM. We achieved a relative WER reduction of 8% using the FLM and a 14% relative WER reduction using the RNN LM with respect to the baseline model.
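The rescoring step described in the abstract, re-ranking N-best hypotheses with a linear interpolation of several LMs, can be summarized in a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the LM wrappers, interpolation weights, and lm_scale value are hypothetical stand-ins, and in practice the per-word probabilities would come from the trained 3-gram, FLM, and RNN models.

```python
import math

def interpolated_sentence_logprob(word_logprobs_per_lm, weights):
    # Linear interpolation per word: P(w | h) = sum_i weight_i * P_i(w | h),
    # accumulated over the sentence in log10 space.
    total = 0.0
    for position in zip(*word_logprobs_per_lm):
        p = sum(w * 10.0 ** lp for w, lp in zip(weights, position))
        total += math.log10(p)
    return total

def rescore_nbest(nbest, lms, weights, lm_scale=10.0):
    # Re-rank (words, acoustic_log_score) pairs by the acoustic score plus
    # the scaled, interpolated LM score; a higher combined score is better.
    def combined(hyp):
        words, am_score = hyp
        lm_logprob = interpolated_sentence_logprob(
            [lm(words) for lm in lms], weights)
        return am_score + lm_scale * lm_logprob
    return sorted(nbest, key=combined, reverse=True)

# Toy usage with stand-in LMs (all scores are illustrative, not from the paper):
trigram = lambda words: [-1.2] * len(words)  # hypothetical 3-gram LM wrapper
rnn_lm = lambda words: [-1.0] * len(words)   # hypothetical RNN LM wrapper
nbest = [(["привет", "мир"], -250.0), (["привет", "нас"], -248.5)]
print(rescore_nbest(nbest, [trigram, rnn_lm], weights=[0.5, 0.5])[0][0])
```

The interpolation weights would normally be tuned on held-out data; the paper reports that combining the RNN LM or FLM with the baseline 3-gram LM in this way yields the quoted WER reductions.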

Citation (APA)

Kipyatkova, I., & Karpov, A. (2015). A comparison of RNN LM and FLM for Russian speech recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9319, pp. 42–50). Springer Verlag. https://doi.org/10.1007/978-3-319-23132-7_5
