Adaptive Levenberg–Marquardt Algorithm: A New Optimization Strategy for Levenberg–Marquardt Neural Networks

Abstract

Engineering data are often highly nonlinear and contain high-frequency noise, so the Levenberg–Marquardt (LM) algorithm may fail to converge when a neural network optimized by it is trained on such data. In this work, we analyzed the causes of the poor convergence commonly associated with LM neural networks. Specifically, we evaluated the effects of different activation functions, such as Sigmoid, Tanh, Rectified Linear Unit (ReLU), and Parametric Rectified Linear Unit (PReLU), on the general performance of LM neural networks, and identified particular parameter values of LM neural networks that can cause the LM algorithm to converge poorly. We proposed an adaptive LM (AdaLM) algorithm to address this problem. The algorithm coordinates the descent direction and the descent step size via the iteration number, which prevents the optimization from falling into local minima and avoids the influence of the parameter state of LM neural networks. We compared the AdaLM algorithm with the traditional LM algorithm and its variants in terms of accuracy and speed on common benchmark datasets and aero-engine data, and the results verified the effectiveness of the AdaLM algorithm.
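For context, the standard LM update that the abstract refers to solves a damped normal-equation system at each step. The sketch below shows this generic update on a toy least-squares fit, with a hypothetical iteration-dependent damping schedule standing in for the paper's AdaLM rule (the paper's exact schedule is not given in the abstract, so `lam = 1 / (1 + k)` is purely illustrative):

```python
import numpy as np

def lm_step(theta, residual_fn, jacobian_fn, lam):
    """One Levenberg-Marquardt update:
    theta <- theta - (J^T J + lam*I)^{-1} J^T r."""
    r = residual_fn(theta)
    J = jacobian_fn(theta)
    A = J.T @ J + lam * np.eye(theta.size)
    return theta - np.linalg.solve(A, J.T @ r)

# Toy problem: fit y = a*x + b to noisy data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)

residual = lambda th: th[0] * x + th[1] - y
jacobian = lambda th: np.column_stack([x, np.ones_like(x)])

theta = np.zeros(2)
for k in range(20):
    # Hypothetical damping that decays with the iteration number k
    # (illustrative only; NOT the AdaLM schedule from the paper).
    lam = 1.0 / (1.0 + k)
    theta = lm_step(theta, residual, jacobian, lam)
```

Large damping (early iterations) makes the step behave like gradient descent, while small damping (late iterations) approaches the Gauss–Newton step; coupling the damping to the iteration count is the general idea the abstract describes.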


CITATION STYLE

APA

Yan, Z., Zhong, S., Lin, L., & Cui, Z. (2021). Adaptive Levenberg–Marquardt algorithm: A new optimization strategy for Levenberg–Marquardt neural networks. Mathematics, 9(17). https://doi.org/10.3390/math9172176
