Improving the learning speed in 2-Layered LSTM network by estimating the configuration of hidden units and optimizing weights initialization


Abstract

This paper describes a method to initialize the weights of an LSTM network and to estimate the configuration of its hidden units, with the goal of reducing training time for function approximation tasks. The method is motivated by the behavior of the hidden units and by the complexity of the function to be approximated. Results obtained for 1-D and 2-D functions show that the proposed methodology improves network performance and stabilizes the training phase. © Springer-Verlag Berlin Heidelberg 2008.
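The general idea of sizing the hidden layer from the target function's complexity and then initializing weights accordingly can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the heuristic `estimate_hidden_units` and the fan-in-scaled uniform initialization are common choices assumed here for demonstration.

```python
import numpy as np

def estimate_hidden_units(n_samples, n_inputs, complexity_factor=2):
    # Hypothetical heuristic: grow the hidden layer with the input
    # dimensionality and (logarithmically) with the dataset size.
    return max(4, complexity_factor * n_inputs + int(np.log2(n_samples)))

def init_lstm_weights(n_inputs, n_hidden, seed=0):
    # Small-uniform initialization scaled by fan-in (a standard scheme,
    # not necessarily the one proposed in the paper).
    rng = np.random.default_rng(seed)
    bound = 1.0 / np.sqrt(n_inputs + n_hidden)
    # An LSTM layer has 4 gates (input, forget, cell, output), each with
    # input-to-hidden and hidden-to-hidden weights plus a bias.
    W = rng.uniform(-bound, bound, size=(4 * n_hidden, n_inputs))
    U = rng.uniform(-bound, bound, size=(4 * n_hidden, n_hidden))
    b = np.zeros(4 * n_hidden)
    b[n_hidden:2 * n_hidden] = 1.0  # forget-gate bias of 1, a common trick
    return W, U, b

# Example: size and initialize a layer for a 2-D function approximation task.
n_hidden = estimate_hidden_units(n_samples=1000, n_inputs=2)
W, U, b = init_lstm_weights(n_inputs=2, n_hidden=n_hidden)
```

In this sketch, a principled hidden-layer size and bounded initial weights both act to keep early gate activations in a useful range, which is one way such schemes stabilize the first epochs of training.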

CITATION STYLE

APA

Corrêa, D. C., Levada, A. L. M., & Saito, J. H. (2008). Improving the learning speed in 2-Layered LSTM network by estimating the configuration of hidden units and optimizing weights initialization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 109–118). https://doi.org/10.1007/978-3-540-87536-9_12
