Review of second-order optimization techniques in artificial neural networks backpropagation

Abstract

Second-order optimization techniques are an advance over first-order optimization in neural networks. They provide additional curvature information about the objective function, which is used to adaptively estimate the step length along the optimization trajectory during the training phase of a neural network. With this additional information, training requires fewer iterations and achieves fast convergence with less hyper-parameter tuning. Recent improvements in memory allocation and computing power further motivate machine learning practitioners to revisit the benefits of second-order optimization techniques. This paper reviews second-order optimization techniques that involve Hessian calculation for neural network training. It covers the basic theory of the Newton method, quasi-Newton, Gauss-Newton, Levenberg-Marquardt, Approximate Greatest Descent, and Hessian-Free optimization. The paper summarizes the feasibility and performance of these optimization techniques in deep neural network training, and highlights comments and suggestions on their advantages and limitations for artificial neural network training.
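
To make the role of the curvature information concrete, the following is a minimal, illustrative sketch of a damped Newton update (in the spirit of Levenberg-Marquardt) on a toy quadratic objective. The function name damped_newton_step, the damping value, and the toy problem are assumptions made here for illustration; they are not taken from the reviewed paper.

import numpy as np

def damped_newton_step(grad, hessian, damping=1e-3):
    # Solve (H + damping * I) d = -g for the Newton-style update direction.
    n = hessian.shape[0]
    return np.linalg.solve(hessian + damping * np.eye(n), -grad)

# Toy quadratic objective f(w) = 0.5 * w.T @ A @ w - b @ w:
# its gradient is A @ w - b and its Hessian is the constant matrix A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
w = np.zeros(2)

for _ in range(10):
    g = A @ w - b                      # gradient at the current iterate
    w = w + damped_newton_step(g, A)   # curvature-scaled step, no manual learning rate

print(w)  # approaches the exact minimizer np.linalg.solve(A, b) in a few steps

Because the Hessian rescales the gradient, the step length adapts to the local curvature rather than relying on a hand-tuned learning rate, which is the property the abstract refers to.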

Cite


APA

Tan, H. H., & Lim, K. H. (2019). Review of second-order optimization techniques in artificial neural networks backpropagation. In IOP Conference Series: Materials Science and Engineering (Vol. 495). Institute of Physics Publishing. https://doi.org/10.1088/1757-899X/495/1/012003
