Teaching learning-based whale optimization algorithm for multi-layer perceptron neural network training

Abstract

This paper presents an improved teaching learning-based whale optimization algorithm (TSWOA) that incorporates the simplex method. First, combining the whale optimization algorithm (WOA) with the teaching-learning-based optimization algorithm not only achieves a better balance between the exploration and exploitation phases of WOA, but also gives the whales a self-learning ability grounded in their biological background, greatly enriching the theory of the original WOA algorithm. Second, the simplex method is added to optimize the current worst agent, preventing agents from searching at the boundary and increasing the convergence accuracy and speed of the algorithm. To evaluate the performance of the improved algorithm, TSWOA is employed to train multi-layer perceptron (MLP) neural networks, a task for which it is difficult to devise a satisfactory and effective optimization algorithm. Fifteen data sets were selected from the UCI machine learning repository, and the statistical results were compared with GOA, GSO, SSO, FPA, GA, and WOA, respectively. The statistical results show that TSWOA performs better than WOA and several well-established algorithms for training multi-layer perceptron neural networks.
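The core idea of metaheuristic MLP training described above can be illustrated with a minimal sketch. This is not the paper's TSWOA (the teaching-learning phase and simplex step are omitted); it is a plain WOA loop that flattens all weights and biases of a tiny 2-2-1 MLP into one vector and minimizes mean squared error on XOR. All network sizes, bounds, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Dimension of the search space: all weights and biases of a 2-2-1 MLP.
DIM = 2 * 2 + 2 + 2 * 1 + 1  # W1 (2x2) + b1 (2) + W2 (2x1) + b2 (1) = 9

def mlp_mse(w):
    """Fitness: decode the flat vector into MLP parameters, return MSE."""
    W1 = w[:4].reshape(2, 2)
    b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1)
    b2 = w[8]
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    out = 1.0 / (1.0 + np.exp(-((h @ W2).ravel() + b2)))  # sigmoid output
    return np.mean((out - y) ** 2)

def woa(fitness, dim, n_whales=20, iters=300, bound=5.0):
    """Basic WOA: encircling, random search, and spiral update phases."""
    pos = rng.uniform(-bound, bound, (n_whales, dim))
    best = min(pos, key=fitness).copy()
    best_f = fitness(best)
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters          # a decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:             # exploit: encircle the best whale
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                      # explore: move toward a random whale
                    rand = pos[rng.integers(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                          # spiral update toward the best whale
                l = rng.uniform(-1, 1)
                D = np.abs(best - pos[i])
                pos[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], -bound, bound)
            f = fitness(pos[i])
            if f < best_f:
                best_f, best = f, pos[i].copy()
    return best, best_f

w, err = woa(mlp_mse, DIM)
print(f"final MSE on XOR: {err:.4f}")
```

The paper's TSWOA would additionally run a teaching-learning update on the population each iteration and replace the worst whale via a simplex reflection step, which is what improves convergence accuracy near the boundary.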

Citation (APA)

Zhou, Y., Niu, Y., Luo, Q., & Jiang, M. (2020). Teaching learning-based whale optimization algorithm for multi-layer perceptron neural network training. Mathematical Biosciences and Engineering, 17(5), 5987–6025. https://doi.org/10.3934/MBE.2020319
