Learning long term dependencies with recurrent neural networks


Abstract

Recurrent neural networks (RNNs) unfolded in time are in theory able to map any open dynamical system. Still, they are often claimed to be unable to identify long-term dependencies in the data. In particular, when they are trained with backpropagation through time (BPTT), it is argued that RNNs unfolded in time fail to learn inter-temporal influences more than ten time steps apart. This paper provides a disproof of this often-cited statement. We show that RNNs, and especially normalised recurrent neural networks (NRNNs), unfolded in time are indeed very capable of learning time lags of at least a hundred time steps. We further demonstrate that the problem of a vanishing gradient does not apply to these networks. © Springer-Verlag Berlin Heidelberg 2006.
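To make the vanishing-gradient claim discussed in the abstract concrete, the following is a minimal NumPy sketch (not taken from the paper) of the standard argument: in a vanilla RNN unfolded in time, the gradient flowing back k steps is a product of k Jacobians, whose norm typically decays when the recurrent weights are small. The state size, weight scale, and time horizon below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
state_dim = 20
T = 100  # number of unfolded time steps

# Random recurrent weight matrix with a small scale (contractive regime).
W = rng.normal(scale=0.1, size=(state_dim, state_dim))

# Forward pass: s_t = tanh(W s_{t-1}); states are kept to form the Jacobians.
states = [rng.normal(size=state_dim)]
for t in range(T):
    states.append(np.tanh(W @ states[-1]))

# Backward pass: norm of d s_T / d s_{T-k} as k grows.
grad = np.eye(state_dim)
for k, s in enumerate(reversed(states[:-1]), start=1):
    pre_activation = W @ s
    jac = np.diag(1.0 - np.tanh(pre_activation) ** 2) @ W  # Jacobian d s_t / d s_{t-1}
    grad = grad @ jac
    if k % 10 == 0:
        print(f"{k:3d} steps back: ||d s_T / d s_(T-k)|| = {np.linalg.norm(grad):.3e}")

Running this shows the gradient norm shrinking roughly geometrically with the number of steps, which is the effect the paper argues does not have to apply to properly constructed (e.g. normalised) recurrent networks unfolded in time.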

Citation (APA)

Schäfer, A. M., Udluft, S., & Zimmermann, H. G. (2006). Learning long term dependencies with recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4131 LNCS-I, pp. 71–80). Springer Verlag. https://doi.org/10.1007/11840817_8
