Recurrent neural networks with external memory for spoken language understanding

Abstract

Recurrent Neural Networks (RNNs) have become increasingly popular for the task of language understanding. In this task, a semantic tagger is deployed to associate a semantic label with each word in an input sequence. The success of RNNs may be attributed to their ability to memorise long-term dependencies that relate the current semantic label prediction to observations many time steps away. However, the memory capacity of simple RNNs is limited because of the gradient vanishing and exploding problem. We propose to use an external memory to improve the memorisation capability of RNNs. Experiments on the ATIS dataset demonstrate that the proposed model achieves state-of-the-art results. Detailed analysis may provide insights for future research.
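The abstract describes the idea only at a high level, so a small sketch may help make the mechanism concrete. Below is a minimal, illustrative PyTorch implementation of an RNN slot tagger augmented with a content-addressed external memory: at each step the cell reads from a bank of memory slots via attention and softly writes its new hidden state back. The class name, slot count, and read/write rules are assumptions made for this sketch, not the authors' exact equations.

```python
import torch
import torch.nn as nn


class MemoryAugmentedTagger(nn.Module):
    """RNN slot tagger with a content-addressed external memory.

    Illustrative sketch only: the slot count and the read/write
    rules are assumptions, not the paper's exact formulation.
    """

    def __init__(self, vocab_size, num_labels, hidden_size=100, slots=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # The cell sees the word embedding concatenated with the memory read.
        self.cell = nn.RNNCell(2 * hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, num_labels)
        self.slots, self.hidden_size = slots, hidden_size

    def forward(self, token_ids):          # token_ids: (seq_len,) int64
        h = torch.zeros(self.hidden_size)
        memory = torch.zeros(self.slots, self.hidden_size)
        logits = []
        for x in self.embed(token_ids):    # one embedded word per step
            # Read: attend over memory slots using the current hidden state.
            weights = torch.softmax(memory @ h, dim=0)       # (slots,)
            read = weights @ memory                          # (hidden_size,)
            # Recurrent update conditioned on the input and the read vector.
            h = self.cell(torch.cat([x, read]).unsqueeze(0),
                          h.unsqueeze(0)).squeeze(0)
            # Write: softly blend the new hidden state into each slot.
            memory = memory + weights.unsqueeze(1) * (h - memory)
            logits.append(self.out(h))
        return torch.stack(logits)         # (seq_len, num_labels)


# Example: tag a 5-token utterance with 3 candidate labels.
tagger = MemoryAugmentedTagger(vocab_size=1000, num_labels=3)
scores = tagger(torch.tensor([4, 17, 9, 250, 3]))
print(scores.shape)  # torch.Size([5, 3])
```

Because both the read and the write are soft (attention-weighted), the tagger remains fully differentiable and can be trained end-to-end with a per-token cross-entropy loss, for example on ATIS slot labels.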

Citation (APA)

Peng, B., Yao, K., Jing, L., & Wong, K. F. (2015). Recurrent neural networks with external memory for spoken language understanding. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9362, pp. 25–35). Springer Verlag. https://doi.org/10.1007/978-3-319-25207-0_3
