We consider regularization methods to improve the recently introduced backpropagation-decorrelation (BPDC) algorithm for O(N) online training of fully recurrent networks. BPDC combines one-step error backpropagation with exploitation of the temporal memory of the network dynamics by means of decorrelation of activations; it is an online algorithm that uses only instantaneous states and errors. As an enhancement, we propose several ways to introduce memory into the algorithm for regularization. Simulation results on standard tasks show that these strategies have different effects: some improve training performance at the cost of overfitting, while others degrade the training error. © Springer-Verlag Berlin Heidelberg 2005.
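The abstract describes an O(N) online rule that combines an instantaneous output error with a state-normalized ("decorrelated") activation term. The following is a simplified illustrative sketch in that spirit, not the exact BPDC rule from the paper: the reservoir setup, the one-step-ahead prediction task, and all names (`W`, `w_out`, `eta`, `eps`) are assumptions for demonstration, and the update shown is a rank-one normalized least-mean-squares step on the recurrent states.

```python
import numpy as np

# Illustrative sketch only: a simplified online update in the spirit of
# backpropagation-decorrelation (BPDC). The exact rule in the paper differs;
# the reservoir, task, and hyperparameters here are assumptions.

rng = np.random.default_rng(0)
N = 50                 # network size; update cost per step is O(N)
eta, eps = 0.5, 1e-3   # learning rate and regularizing constant

W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # fixed recurrent weights
w_out = np.zeros(N)                                  # trained output weights

x = np.zeros(N)
err_hist = []
for k in range(2000):
    u = np.sin(0.1 * k)             # input signal
    target = np.sin(0.1 * (k + 1))  # one-step-ahead prediction task
    x = np.tanh(W @ x + u)          # recurrent network state
    y = w_out @ x                   # linear readout
    e = y - target                  # instantaneous output error
    # O(N) online step: error times state, normalized by the squared
    # state norm plus eps (a rank-one stand-in for decorrelation)
    w_out -= eta * e * x / (x @ x + eps)
    err_hist.append(e * e)

print(np.mean(err_hist[:100]), np.mean(err_hist[-100:]))
```

Because the update touches only the N output weights and uses only the current state and error, each step is O(N) in time and memory, which is the efficiency property the abstract highlights.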
CITATION STYLE
Steil, J. J. (2005). Memory in backpropagation-decorrelation O(N) efficient online recurrent learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3697 LNCS, pp. 649–654). https://doi.org/10.1007/11550907_103