This paper presents language models based on Long Short-Term Memory (LSTM) neural networks for very large vocabulary continuous Russian speech recognition. We created neural networks with various numbers of units in the hidden and projection layers, using different optimization methods. The obtained LSTM-based language models were used for N-best list rescoring. We also tested a linear interpolation of the LSTM language model with the baseline 3-gram language model and achieved a 22% relative reduction in word error rate with respect to the baseline 3-gram model.
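The rescoring step described above combines an LSTM language model with a 3-gram model by linear interpolation of their probabilities. The following is a minimal sketch of that idea, not the paper's actual implementation: the function names, the interpolation weight `lam`, and the toy N-best scores are all assumptions for illustration.

```python
import math

def interpolate_logprob(lstm_logprob, ngram_logprob, lam=0.5):
    # Linearly interpolate the two LM probabilities in the probability
    # domain, then return the log of the mixture. The weight `lam` is
    # an assumed value; in practice it is tuned on held-out data.
    p = lam * math.exp(lstm_logprob) + (1.0 - lam) * math.exp(ngram_logprob)
    return math.log(p)

def rescore_nbest(hypotheses, lam=0.5):
    # Each hypothesis is a tuple:
    # (text, acoustic_logprob, lstm_lm_logprob, ngram_lm_logprob).
    # Re-rank the N-best list by acoustic score plus interpolated LM score
    # and return the top hypothesis.
    scored = [
        (text, ac + interpolate_logprob(lstm, ngram, lam))
        for text, ac, lstm, ngram in hypotheses
    ]
    return max(scored, key=lambda item: item[1])[0]

# Hypothetical N-best list with made-up log scores.
nbest = [
    ("privet mir",  -10.0, math.log(0.02), math.log(0.01)),
    ("privet mira", -10.5, math.log(0.05), math.log(0.04)),
]
best = rescore_nbest(nbest)
```

In this toy example the second hypothesis wins because its higher language-model probabilities outweigh its slightly worse acoustic score.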
CITATION STYLE
Kipyatkova, I. (2019). LSTM-based language models for very large vocabulary continuous Russian speech recognition system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11658 LNAI, pp. 219–226). Springer Verlag. https://doi.org/10.1007/978-3-030-26061-3_23