An optimized second order stochastic learning algorithm for neural network training

4 citations · 2 Mendeley readers
Abstract

The performance of a neural network depends critically on its model structure and the corresponding learning algorithm. This paper proposes bounded stochastic diagonal Levenberg-Marquardt (B-SDLM), an improved second order stochastic learning algorithm for supervised neural network training. The algorithm requires only a single hyperparameter and adds negligible computation compared to the conventional stochastic gradient descent (SGD) method, while ensuring better learning stability. Experiments show very fast convergence and better generalization ability, with the proposed algorithm outperforming several other learning algorithms.
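The abstract does not give the update rule itself. As a rough illustration of the general idea behind a bounded stochastic diagonal Levenberg-Marquardt step, the sketch below scales each parameter's gradient by a running diagonal-curvature estimate and bounds that estimate from below so the effective step size cannot blow up. The function name, the squared-gradient curvature approximation, and all constants are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def b_sdlm_step(w, grad, h_est, lr=0.01, decay=0.99, floor=0.1):
    """Hypothetical bounded SDLM-style update (illustrative sketch only).

    Maintains a running estimate of the diagonal curvature, approximated
    here by an exponential average of squared gradients (Gauss-Newton
    style), and divides the gradient by it per parameter, with a lower
    bound on the estimate so the step size always stays finite.
    """
    # Running diagonal curvature estimate (squared-gradient approximation).
    h_est = decay * h_est + (1.0 - decay) * grad ** 2
    # Bound the curvature from below to keep the effective rate finite.
    h_bounded = np.maximum(h_est, floor)
    # Per-parameter scaled gradient descent step.
    w = w - lr * grad / h_bounded
    return w, h_est
```

On a toy quadratic, repeated calls drive the weight toward zero; while the curvature estimate is still below the floor, the step behaves like plain SGD with rate `lr / floor`, which is the sense in which bounding trades a little adaptivity for stability.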

Citation (APA)

Khalil-Hani, M., Liew, S. S., & Bakhteri, R. (2015). An optimized second order stochastic learning algorithm for neural network training. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9489, pp. 38–45). Springer Verlag. https://doi.org/10.1007/978-3-319-26532-2_5
