Benchmarking Differential Evolution

Abstract

Testing can be a valuable tool for understanding how and why an algorithm performs as it does. For example, testing can measure how an algorithm's performance depends on objective function characteristics like dimension, number of local minima, degree of parameter dependence, dynamic range of parameters, constraints, quantization, noise, etc. Testing can also show which control parameter combinations are the most effective. This knowledge can be particularly useful, since finding an effective set of control parameter combinations is itself a multi-objective optimization problem in which fast and reliable convergence are conflicting objectives. In addition, test functions are a convenient way to compare one algorithm's performance to that of another. Furthermore, testing can lead to new insights that can be exploited to enhance an optimizer's performance.

Despite its value, testing can be misleading if results are not correctly interpreted. For example, the dimension of some test functions can be arbitrarily increased to probe an algorithm's scaling behavior. In all but the simplest cases, however, changing an objective function's dimension also changes its other characteristics (e.g., number of local minima, dynamic range of optimal function parameter values, etc.). Thus, an algorithm's response to a change in test function dimension must be understood in the context of the accompanying alterations to the objective function landscape. Test beds that consist entirely of separable functions are another example in which test functions provide misleading clues about an algorithm's versatility. For many years, most of the functions used for testing GAs were separable (see Sect. 1.2.3). Consequently, GAs with low mutation rates performed well on these early test beds, leading to high expectations for their success as numerical optimizers. It is now clear that these early successes do not extend to parameter-dependent functions because GAs are limited by their lack of a scheme for correlating mutations (see Sects. 1.2.3 and 2.6.2) (Salomon 1996).
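The separability issue described above can be illustrated with a small sketch (not from the chapter itself). The classic Rastrigin function is separable: each coordinate contributes independently to the objective value. Composing it with a random rotation makes the parameters interdependent while leaving the global minimum value unchanged, which is one common way benchmark suites introduce parameter dependence. The function names and the use of NumPy here are illustrative assumptions, not the chapter's own code.

```python
import numpy as np

def rastrigin(x):
    # Separable test function: a sum of identical one-dimensional terms,
    # so each coordinate can be optimized independently of the others.
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def rotated(f, R):
    # Compose f with a rotation R to couple the parameters; the rotated
    # function is generally no longer separable.
    return lambda x: f(R @ np.asarray(x, dtype=float))

rng = np.random.default_rng(0)
n = 5
# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
rastrigin_rot = rotated(rastrigin, Q)

# Both versions share the same optimum value at the origin (Q @ 0 = 0),
# but only the unrotated one rewards coordinate-wise search.
print(rastrigin(np.zeros(n)))      # 0.0
print(rastrigin_rot(np.zeros(n)))  # 0.0
```

An optimizer that perturbs one parameter at a time (e.g., a GA with a low mutation rate) can exploit the separable version's structure, yet this advantage disappears on the rotated version, which is the behavior Salomon (1996) documented.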

Citation
Benchmarking Differential Evolution. (2006). In Differential Evolution (pp. 135–187). Springer-Verlag. https://doi.org/10.1007/3-540-31306-0_3
