In this paper, we propose an adaptive BFGS method that uses a self-adaptive scaling factor for the Hessian approximation and is equipped with a nonmonotone strategy. Our experimental evaluation on several recurrent network architectures provides evidence that the proposed approach successfully trains recurrent networks of various architectures, inheriting the benefits of BFGS while alleviating some of its limitations. © Springer-Verlag Berlin Heidelberg 2007.
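The two ingredients named in the abstract can be sketched together in a generic quasi-Newton loop. The sketch below is illustrative only, not the authors' exact scheme: it assumes an Oren–Luenberger-type self-scaling factor for the inverse-Hessian approximation and a Grippo-style nonmonotone Armijo test; the memory length `M` and all tolerances are assumed values.

```python
import numpy as np

def nonmonotone_self_scaling_bfgs(f, grad, x0, max_iter=100, M=5,
                                  c1=1e-4, tol=1e-8):
    """Sketch of BFGS with self-scaling and a nonmonotone line search.

    All parameter choices here are illustrative assumptions, not the
    algorithm from the paper.
    """
    n = x0.size
    H = np.eye(n)            # inverse-Hessian approximation
    x = x0.copy()
    g = grad(x)
    f_hist = [f(x)]          # recent values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        # Nonmonotone Armijo backtracking: sufficient decrease is
        # measured against the max of the last M function values,
        # not the current one, allowing occasional increases.
        f_ref = max(f_hist[-M:])
        alpha = 1.0
        while f(x + alpha * d) > f_ref + c1 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:       # curvature condition holds
            # Self-adaptive scaling (Oren–Luenberger style): rescale H
            # so its magnitude matches the local curvature before the
            # standard BFGS update of the inverse Hessian.
            tau = sy / (y @ H @ y)
            H = tau * H
            rho = 1.0 / sy
            I = np.eye(n)
            V = I - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x
```

In a training context, `f` would be the network's error over the training set and `grad` its gradient (for recurrent networks, computed e.g. by backpropagation through time); the quadratic test case below only checks the optimizer loop itself.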
Peng, C. C., & Magoulas, G. D. (2007). Adaptive self-scaling non-monotone BFGS training algorithm for recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4668 LNCS, pp. 259–268). Springer Verlag. https://doi.org/10.1007/978-3-540-74690-4_27