Adaptive self-scaling non-monotone BFGS training algorithm for recurrent neural networks

Abstract

In this paper, we propose an adaptive BFGS method that uses a self-scaling factor for the Hessian approximation and is equipped with a non-monotone strategy. Our experimental evaluation provides evidence that the proposed approach successfully trains recurrent networks of various architectures, inheriting the benefits of BFGS while, at the same time, alleviating some of its limitations. © Springer-Verlag Berlin Heidelberg 2007.
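To make the ingredients of the abstract concrete, the sketch below combines a standard self-scaling BFGS update with a non-monotone acceptance test. This is a minimal illustration of the general technique, not the authors' exact algorithm: the Oren–Luenberger scaling factor and the Grippo–Lampariello–Lucidi-style non-monotone line search used here are common textbook choices, and the paper's adaptive scaling rule and non-monotone strategy may differ.

```python
import numpy as np

def ss_nm_bfgs(f, grad, x0, max_iter=200, M=10, c1=1e-4, tol=1e-6):
    """Sketch: self-scaling BFGS with a non-monotone (GLL-style) line search.

    Illustrative only; the paper's adaptive scaling and non-monotone
    strategy are assumptions standing in for the authors' exact rules.
    """
    n = x0.size
    x = x0.copy()
    H = np.eye(n)                  # inverse-Hessian approximation
    g = grad(x)
    fvals = [f(x)]                 # recent values for the non-monotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # quasi-Newton search direction
        # Non-monotone backtracking: compare against the worst of the
        # last M function values instead of the current value alone.
        f_ref = max(fvals[-M:])
        alpha = 1.0
        while f(x + alpha * p) > f_ref + c1 * alpha * (g @ p):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:             # curvature condition; skip update otherwise
            # Oren-Luenberger self-scaling factor applied to the BFGS update
            tau = sy / (y @ H @ y)
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = tau * (V @ H @ V.T) + rho * np.outer(s, s)
        x, g = x_new, g_new
        fvals.append(f(x))
    return x
```

Allowing the objective to rise above recent iterates (the `max(fvals[-M:])` reference) is what lets the method traverse the highly non-convex error surfaces typical of recurrent network training, where a strictly monotone line search can stall in narrow valleys.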


CITATION STYLE

APA

Peng, C. C., & Magoulas, G. D. (2007). Adaptive self-scaling non-monotone BFGS training algorithm for recurrent neural networks. In Lecture Notes in Computer Science (Vol. 4668, pp. 259–268). Springer-Verlag. https://doi.org/10.1007/978-3-540-74690-4_27
