Application of LSTM neural networks in language modelling

68 citations · 92 Mendeley readers

Abstract

Artificial neural networks have become state-of-the-art in the task of language modelling on small corpora. While feed-forward networks can take into account only a fixed context length to predict the next word, recurrent neural networks (RNN) can take advantage of all previous words. Because RNNs are difficult to train, one way forward is the Long Short-Term Memory (LSTM) neural network architecture. In this work, we present an application of an LSTM network with extensions to a language modelling task on Czech spontaneous phone calls. Experiments show considerable improvements in perplexity and in the word error rate (WER) of the recognition system over an n-gram baseline. © 2013 Springer-Verlag.
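
To make the feed-forward/recurrent contrast concrete, below is a minimal sketch of an LSTM language model in PyTorch. The class name, layer sizes, and toy data are illustrative assumptions, not the authors' actual setup or extensions; the last line uses the standard relationship that perplexity is the exponential of the average cross-entropy, the quantity the paper's evaluation compares against an n-gram baseline.

```python
# Minimal LSTM language model sketch (illustrative; not the paper's exact model).
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Unlike a feed-forward model with a fixed context window, the LSTM
        # carries a recurrent state, so in principle the whole word history
        # can influence the next-word prediction.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        emb = self.embed(tokens)            # (batch, seq, embed_dim)
        out, state = self.lstm(emb, state)  # (batch, seq, hidden_dim)
        return self.proj(out), state        # logits over the next word

# Toy usage: score each next token of a random "sentence".
vocab_size = 1000
model = LSTMLanguageModel(vocab_size)
tokens = torch.randint(0, vocab_size, (1, 20))   # (batch=1, seq=20)
logits, _ = model(tokens[:, :-1])                # predict tokens[1:]
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
print("perplexity:", loss.exp().item())          # perplexity = exp(cross-entropy)
```

In a real setup the model would be trained on the corpus and the recurrent state carried across utterance segments; here the untrained model simply illustrates how perplexity is read off the next-word cross-entropy.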

Citation (APA)

Soutner, D., & Müller, L. (2013). Application of LSTM neural networks in language modelling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8082 LNAI, pp. 105–112). https://doi.org/10.1007/978-3-642-40585-3_14
