A novel connectionist network for solving long time-lag prediction tasks

Abstract

Traditional Recurrent Neural Networks (RNNs) perform poorly on learning tasks involving long time-lag dependencies. More recent approaches such as LSTM and its variants significantly improve on RNNs' ability to learn this type of problem. We present an alternative approach to encoding temporal dependencies that associates temporal features with nodes rather than state values, where the nodes explicitly encode dependencies over variable time delays. We show promising results comparing the network's performance to that of LSTM variants on an extended Reber grammar task. © Springer-Verlag Berlin Heidelberg 2009.
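In the LSTM literature, the long time-lag benchmark referred to here is usually the embedded Reber grammar, in which the network must remember the second input symbol across an entire inner Reber string in order to predict the second-to-last symbol. The abstract does not spell out the paper's "extended" variant, so the following Python sketch only illustrates the kind of dependency involved by generating strings from the standard embedded Reber grammar; the transition table and helper names are assumptions, not taken from the paper.

```python
import random

# Standard Reber grammar: each state lists its (symbol, next_state)
# choices; strings start with 'B' and end with 'E'.
REBER_TRANSITIONS = [
    [('T', 1), ('P', 2)],   # state 0
    [('S', 1), ('X', 3)],   # state 1
    [('T', 2), ('V', 4)],   # state 2
    [('X', 2), ('S', 5)],   # state 3
    [('P', 3), ('V', 5)],   # state 4
]                            # state 5 is terminal ('E')

def reber_string(rng=random):
    """Generate one string from the standard Reber grammar."""
    state, symbols = 0, ['B']
    while state != 5:
        symbol, state = rng.choice(REBER_TRANSITIONS[state])
        symbols.append(symbol)
    symbols.append('E')
    return ''.join(symbols)

def embedded_reber_string(rng=random):
    """Embedded Reber grammar: the second symbol (T or P) must be
    remembered for the whole inner string to predict the
    second-to-last symbol -- the long time-lag dependency."""
    wrapper = rng.choice('TP')
    return 'B' + wrapper + reber_string(rng) + wrapper + 'E'

if __name__ == '__main__':
    random.seed(0)
    print(embedded_reber_string())   # one training string, e.g. 'B T ... T E'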
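```

Because the inner string can loop for an arbitrary number of steps, the gap between the wrapper symbol and the point where it must be recalled is variable, which is exactly the situation where plain RNNs struggle and where architectures with explicit or gated memory are tested.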

Citation (APA)

Johnson, K., & MacNish, C. (2009). A novel connectionist network for solving long time-lag prediction tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5866 LNAI, pp. 557–566). https://doi.org/10.1007/978-3-642-10439-8_56
