OCD: Online convergence detection for evolutionary multi-objective algorithms based on statistical testing

Abstract

Over the last decades, evolutionary algorithms (EA) have proven their applicability to hard and complex industrial optimization problems. However, especially when fitness evaluations (FE) are computationally expensive, the number of required FE is often seen as a drawback of these techniques. This is partly due to the lack of robust and reliable methods for detecting convergence, which would stop the algorithm before useless evaluations are carried out. To overcome this drawback, we define a method for online convergence detection (OCD) based on statistical tests, which invokes a number of performance indicators and which can be applied on a stand-alone basis (no predefined Pareto fronts, ideal points, or reference points are required). Our experiments show the general applicability of OCD by analyzing its performance for different algorithmic setups and on different classes of test functions. Furthermore, we show that the number of FE can be reduced considerably, compared to common suggestions from the literature, without significantly deteriorating approximation accuracy.
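The core idea sketched in the abstract, namely applying statistical tests to a window of recent performance-indicator values to decide online whether the run has stagnated, can be illustrated with a short Python sketch. Note that this is not the authors' exact OCD procedure: the choice of a variance test and a regression-slope test, as well as the window size, variance limit, and significance level, are illustrative assumptions.

"""Sketch of statistical convergence detection for an EMOA run.

Illustrative only: tests a window of performance-indicator values
(e.g. hypervolume per generation) for stagnation. Parameters are
hypothetical, not taken from the paper.
"""
import numpy as np
from scipy import stats


def variance_test(values, var_limit=1e-3, alpha=0.05):
    """One-sided chi-square test: is the indicator variance below var_limit?"""
    n = len(values)
    chi2_stat = (n - 1) * np.var(values, ddof=1) / var_limit
    p = stats.chi2.cdf(chi2_stat, df=n - 1)  # lower tail: small statistic -> small variance
    return p < alpha


def trend_test(values, alpha=0.05):
    """t-test on the linear regression slope: is there no detectable trend left?"""
    gens = np.arange(len(values))
    result = stats.linregress(gens, values)
    # Failing to reject "slope = 0" is taken here as a sign of stagnation.
    return result.pvalue > alpha


def converged(indicator_history, window=10):
    """Signal convergence if either test fires on the last `window` generations."""
    if len(indicator_history) < window:
        return False
    recent = np.asarray(indicator_history[-window:], dtype=float)
    return variance_test(recent) or trend_test(recent)


# Usage: feed indicator values recorded once per generation; the flat tail
# should eventually trigger the stagnation tests and stop the run early.
history = [0.55, 0.68, 0.74, 0.770, 0.7729, 0.7730, 0.7730,
           0.7731, 0.7731, 0.7731, 0.7731, 0.7731, 0.7731]
print(converged(history))

In this simplified form, the detector only looks at one indicator; the paper's stand-alone setting instead combines several performance indicators computed without predefined Pareto fronts or reference points.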

Citation (APA)

Wagner, T., Trautmann, H., & Naujoks, B. (2010). OCD: Online convergence detection for evolutionary multi-objective algorithms based on statistical testing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5467 LNCS, pp. 198–215). https://doi.org/10.1007/978-3-642-01020-0_19
