PMT: Opposition-Based Learning Technique for Enhancing Meta-Heuristic Performance

Abstract

Meta-heuristic algorithms have shown promising performance in solving sophisticated real-world optimization problems. Nevertheless, many meta-heuristic algorithms still suffer from a low convergence rate owing to a poor balance between exploration (i.e., roaming new potential search areas) and exploitation (i.e., exploiting existing neighbors). In some complex problems, the convergence rate can also remain poor owing to entrapment in local optima. To address these issues, this research proposes a new general opposition-based learning (OBL) technique, inspired by the natural phenomenon of parallel mirror systems, called the parallel mirrors technique (PMT). Like existing OBL-based approaches, the PMT generates new potential solutions based on the currently selected candidate. Unlike existing OBL-based techniques, the PMT generates more than one candidate, in multiple directions of the solution space. To evaluate the PMT's performance and adaptability, it has been applied to four contemporary meta-heuristic algorithms, differential evolution (DE), particle swarm optimization (PSO), simulated annealing (SA), and the whale optimization algorithm (WOA), to solve 15 well-known benchmark functions. Experimentally, the PMT shows promising results, accelerating the convergence rate relative to the original algorithms with the same number of fitness evaluations.
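To make the idea concrete, the sketch below shows the standard OBL reflection (an opposite point mirrored through the midpoint of the search bounds) together with a hypothetical multi-candidate generator loosely inspired by the parallel-mirrors metaphor. The `mirror_candidates` function, its perturbation scheme, and the parameter `k` are illustrative assumptions, not the paper's exact PMT formulation:

```python
import numpy as np

def opposite(x, lb, ub):
    """Standard opposition-based learning: reflect x through the
    midpoint of the search bounds [lb, ub]."""
    return lb + ub - x

def mirror_candidates(x, lb, ub, k=3, rng=None):
    """Hypothetical illustration of generating several candidates in
    multiple solution-space directions, as parallel mirrors produce
    multiple images. NOT the paper's exact PMT update rule.

    Each iteration reflects the current point and adds a small random
    offset so that successive 'images' differ, then clips to bounds.
    """
    rng = np.random.default_rng() if rng is None else rng
    candidates = []
    current = np.asarray(x, dtype=float)
    for _ in range(k):
        current = opposite(current, lb, ub)
        # small perturbation so repeated reflections do not simply
        # alternate between two points (assumed, for illustration)
        current = np.clip(
            current + rng.uniform(-0.1, 0.1, current.shape) * (ub - lb),
            lb, ub,
        )
        candidates.append(current.copy())
    return candidates
```

In a typical OBL-enhanced meta-heuristic, each generated candidate would be evaluated with the fitness function, and a candidate replaces the original solution only if it scores better, so the extra candidates count against the same fitness-evaluation budget.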

Citation (APA)
Alamri, H. S., & Zamli, K. Z. (2019). PMT: Opposition-Based Learning Technique for Enhancing Meta-Heuristic Performance. IEEE Access, 7, 97653–97672. https://doi.org/10.1109/ACCESS.2019.2925088
