A competitive learning-based Grey wolf Optimizer for engineering problems and its application to multi-layer perceptron training

Abstract

This article presents a competitive learning-based Grey Wolf Optimizer (Clb-GWO), formulated by introducing competitive learning strategies to achieve a better trade-off between exploration and exploitation while promoting population diversity through the design of difference vectors. The proposed method integrates population sub-division into majority and minority groups with a dual search system arranged in a selective, complementary manner. Clb-GWO is tested and validated on the recent CEC2020 and CEC2019 benchmark suites, followed by the optimal training of multi-layer perceptrons (MLPs) on five classification datasets and three function approximation datasets. Clb-GWO is compared against the standard version of GWO, five of its latest variants, and two modern meta-heuristics. The benchmarking and MLP training results demonstrate the robustness of Clb-GWO: the proposed method performed competitively against all its competitors, with statistically significant performance on the benchmarking tests. On the classification and function approximation datasets, Clb-GWO performed excellently, with lower error rates and the lowest standard deviations.
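For context, the baseline that Clb-GWO extends is the canonical Grey Wolf Optimizer, in which the three best wolves (alpha, beta, delta) lead the pack and every other wolf moves toward their average. The sketch below is a minimal pure-Python implementation of that standard GWO only; it does not implement the paper's competitive learning strategies, majority/minority grouping, or difference vectors, and all function names and default parameters are illustrative assumptions rather than details from the article.

```python
import random

def gwo(objective, dim, bounds, n_wolves=30, n_iter=200, seed=1):
    # Canonical GWO baseline (not the proposed Clb-GWO variant).
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best, best_f = None, float("inf")
    for t in range(n_iter):
        # Rank the pack: alpha, beta, delta are the three best wolves.
        ranked = sorted(wolves, key=objective)
        alpha, beta, delta = (ranked[i][:] for i in range(3))
        f0 = objective(ranked[0])
        if f0 < best_f:
            best, best_f = ranked[0][:], f0
        a = 2.0 * (1.0 - t / n_iter)  # control parameter decreases 2 -> 0
        new_wolves = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a  # |A|>1 explores, |A|<1 exploits
                    C = 2.0 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                pos.append(clip(x / 3.0))  # average pull toward the three leaders
            new_wolves.append(pos)
        wolves = new_wolves
    # Check the final population too, since it is never ranked in the loop above.
    final = min(wolves, key=objective)
    if objective(final) < best_f:
        best, best_f = final, objective(final)
    return best, best_f

# Smoke test: minimize the 5-D sphere function on [-10, 10]^5.
best, best_f = gwo(lambda x: sum(v * v for v in x), dim=5, bounds=(-10.0, 10.0))
```

The `a` parameter shrinking from 2 to 0 is what shifts the pack from exploration to exploitation over the run; Clb-GWO's contribution, per the abstract, is reshaping this trade-off through competitive learning and group-based dual search rather than a single shrinking schedule.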

Citation (APA)

Aala Kalananda, V. K. R., & Komanapalli, V. L. N. (2023). A competitive learning-based Grey wolf Optimizer for engineering problems and its application to multi-layer perceptron training. Multimedia Tools and Applications, 82(26), 40209–40267. https://doi.org/10.1007/s11042-023-15146-x
