Common benchmark functions for metaheuristic evaluation: A review

143 citations · 139 Mendeley readers

Abstract

In the literature, benchmark test functions have been used to evaluate the performance of metaheuristic algorithms. Algorithms that perform well on a set of numerical optimization problems are considered effective methods for solving real-world problems. Different researchers choose different sets of functions with varying configurations, as no standard or universally agreed test-bed exists. This makes it hard for researchers to select functions that can truly gauge the robustness of a newly proposed metaheuristic algorithm. This review is an attempt to provide researchers with commonly used experimental settings, including the selection of test functions with different modalities and dimensions, the number of experimental runs, and evaluation criteria. Hence, the proposed list of functions, based on the existing literature, can be readily employed as an effective test-bed for evaluating a new metaheuristic algorithm or a modified variant of an existing one. To embed more complexity in the problems, these functions can be shifted or rotated for enhanced robustness of the evaluation.
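To illustrate the kind of test-bed the abstract describes, here is a minimal sketch of two classic benchmark functions of different modality (the unimodal Sphere and the multimodal Rastrigin), plus a generic shift wrapper of the sort used to harden benchmarks. The function definitions are the standard ones from the literature; the `shifted` helper and its name are illustrative assumptions, not part of the paper.

```python
import numpy as np

def sphere(x):
    """Sphere function: unimodal, separable; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Rastrigin function: highly multimodal; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def shifted(f, offset):
    """Illustrative wrapper: move a benchmark's optimum from the origin
    to `offset`, so algorithms cannot exploit a centered optimum."""
    offset = np.asarray(offset, dtype=float)
    return lambda x: f(np.asarray(x, dtype=float) - offset)
```

For example, `shifted(sphere, [1.0, -2.0])` has its global minimum at `(1.0, -2.0)` instead of the origin; a rotation wrapper would analogously multiply the input by an orthogonal matrix before evaluation.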

Citation (APA)

Hussain, K., Salleh, M. N. M., Cheng, S., & Naseem, R. (2017). Common benchmark functions for metaheuristic evaluation: A review. International Journal on Informatics Visualization. Politeknik Negeri Padang. https://doi.org/10.30630/joiv.1.4-2.65
