Time delay learning by gradient descent in Recurrent Neural Networks

Abstract

Recurrent Neural Networks (RNNs) possess an implicit internal memory and are well suited to time series forecasting. Unfortunately, the gradient descent algorithms commonly used to train them have two main weaknesses: they are slow, and they have difficulty dealing with long-term dependencies in time series. Adding well-chosen connections with time delays to an RNN often reduces training time and allows gradient descent algorithms to find better solutions. In this article, we show that the principle of learning time delays by gradient descent, although efficient for feed-forward neural networks and theoretically adaptable to RNNs, proves difficult to apply in the latter case.
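To make the idea of "time delay learning by gradient descent" concrete, below is a minimal sketch of the general principle: a hard integer delay on a recurrent connection is replaced by a smooth (here Gaussian) window over recent hidden states, so the delay becomes a continuous, differentiable parameter that gradient descent can adjust. This is an illustrative assumption about the technique, not the authors' implementation; the names (W_in, W_rec, W_del, delay_kernel, sigma) and the finite-difference gradient are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions
n_in, n_hid, max_delay = 1, 4, 5

# Parameters: input weights, ordinary recurrent weights, weights on the
# delayed recurrent connection, and a continuous delay d (in time steps).
W_in  = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))
W_del = rng.normal(scale=0.5, size=(n_hid, n_hid))
d     = 2.3

def delay_kernel(d, sigma=1.0):
    """Gaussian window over integer lags 1..max_delay, centred on d.
    Smoothing the delay this way is what makes it differentiable."""
    lags = np.arange(1, max_delay + 1)
    w = np.exp(-0.5 * ((lags - d) / sigma) ** 2)
    return w / w.sum()

def forward(xs, d):
    """Run the RNN over a sequence and return the final hidden state."""
    T = len(xs)
    H = np.zeros((T + 1, n_hid))  # H[t] is the state after step t
    w = delay_kernel(d)
    for t in range(1, T + 1):
        # Soft-delayed state: weighted mix of the last max_delay states.
        past = np.stack([H[max(t - k, 0)] for k in range(1, max_delay + 1)])
        h_delayed = w @ past
        H[t] = np.tanh(W_in @ xs[t - 1] + W_rec @ H[t - 1] + W_del @ h_delayed)
    return H[-1]

# Toy loss; a central finite difference stands in for the analytic
# gradient of the loss with respect to the delay parameter d.
xs = rng.normal(size=(20, n_in))
loss = lambda d: float(np.sum(forward(xs, d) ** 2))
eps = 1e-5
grad_d = (loss(d + eps) - loss(d - eps)) / (2 * eps)
d -= 0.1 * grad_d  # one gradient-descent step on the delay itself
print(f"dL/dd ~ {grad_d:.4f}, updated delay d = {d:.4f}")
```

The sketch highlights why the RNN case is delicate: the delayed state feeds back into all later states, so the gradient with respect to d propagates through the entire unrolled recurrence, unlike the feed-forward case where each delayed tap affects only one layer.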

Citation (APA)

Boné, R., & Cardot, H. (2005). Time delay learning by gradient descent in Recurrent Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3697 LNCS, pp. 175–180). https://doi.org/10.1007/11550907_29
