Memory in backpropagation-decorrelation O(N) efficient online recurrent learning

Abstract

We consider regularization methods to improve the recently introduced backpropagation-decorrelation (BPDC) algorithm for O(N) online training of fully recurrent networks. Although BPDC combines one-step error backpropagation with the temporal memory of the network dynamics by decorrelating activations, it remains an online algorithm that uses only instantaneous states and errors. As an enhancement, we propose several ways to introduce memory into the algorithm for regularization. Simulation results on standard tasks show that these strategies have differing effects: some improve training performance at the cost of overfitting, while others degrade training errors. © Springer-Verlag Berlin Heidelberg 2005.
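The abstract describes the flavor of the method: an O(N) online rule that updates weights from instantaneous states and errors, normalized by the current activation statistics. The exact BPDC update is given in the cited paper; the snippet below is only a minimal sketch in that spirit. Everything concrete here is an assumption for illustration: the leaky-tanh dynamics, the network size `N`, the learning rate `eta`, the regularizer `eps`, the sine target, and the weak teacher input are all hypothetical choices, not the authors' setup.

```python
import numpy as np

# Illustrative sketch only -- NOT the exact BPDC rule from the paper.
# All concrete choices (dynamics, N, eta, eps, target signal) are assumptions.
rng = np.random.default_rng(0)
N = 20
W = 0.1 * rng.standard_normal((N, N))   # fully recurrent weight matrix
x = np.zeros(N)                          # instantaneous network state
eta, eps = 0.05, 1e-3                    # learning rate, norm regularizer

def target(k):
    return 0.5 * np.sin(0.2 * k)         # toy teacher signal (assumption)

errs = []
for k in range(500):
    u = np.zeros(N)
    u[0] = target(k)                     # weak input cue to neuron 0 (assumption)
    x_new = np.tanh(W @ x + 0.5 * u)     # one step of the recurrent dynamics
    e = x_new[0] - target(k + 1)         # instantaneous error at the readout neuron
    # O(N) online update: scale the instantaneous error by the current state,
    # normalized by its squared norm -- a cheap stand-in for decorrelation.
    W[0] -= eta * e * x / (x @ x + eps)
    x = x_new
    errs.append(e * e)

print(f"early MSE {np.mean(errs[:50]):.4f}, late MSE {np.mean(errs[-50:]):.4f}")
```

Note the cost per step: only one row of `W` is touched and the normalization uses a single inner product, which is what keeps such a rule O(N) per update, in contrast to O(N²) or worse for RTRL-style recurrent learning.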

Cite

CITATION STYLE

APA

Steil, J. J. (2005). Memory in backpropagation-decorrelation O(N) efficient online recurrent learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3697 LNCS, pp. 649–654). https://doi.org/10.1007/11550907_103
