Penalty function methods for constrained optimization with genetic algorithms: A statistical analysis

Abstract

Genetic algorithms (GAs) have been successfully applied to numerical optimization problems. Since GAs are usually designed for unconstrained optimization, they have to be adapted to tackle the constrained cases, i.e., those in which not all representable solutions are valid. In this work we experimentally compare five ways to attain such adaptation. Our analysis relies on the usual approach of selecting an arbitrary suite of test functions (25 of them), but applies a methodology that allows us to determine which method is better within statistical certainty limits. To do this we have selected five penalty function strategies; for each of these we have further selected three particular GAs. The behavior of each strategy and of the associated GAs is then established by extensively sampling the function suite and finding the worst-case best values from Chebyshev's theorem. We have found some counterintuitive results, which we discuss and try to explain.
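As a point of reference only, the sketch below illustrates the general kind of adaptation the abstract describes: a static penalty function that degrades the fitness of infeasible candidates so that an otherwise unconstrained GA can handle a constraint. The test problem, penalty coefficient, and GA operators here are illustrative assumptions and are not the five strategies or the specific GAs evaluated in the paper.

```python
import random

# Hypothetical constrained problem (not from the paper):
# minimize f(x, y) = x^2 + y^2  subject to  g(x, y) = x + y - 1 >= 0.
def objective(ind):
    x, y = ind
    return x**2 + y**2

def constraint_violation(ind):
    x, y = ind
    g = x + y - 1.0          # feasible when g >= 0
    return max(0.0, -g)      # amount by which the constraint is violated

def penalized_fitness(ind, penalty_coeff=1000.0):
    # Static penalty: add a cost proportional to the violation,
    # so infeasible individuals lose out during selection.
    return objective(ind) + penalty_coeff * constraint_violation(ind)

def simple_ga(pop_size=50, generations=200, mutation_sigma=0.1):
    # Real-coded GA with truncation selection, arithmetic crossover,
    # and Gaussian mutation (all chosen for brevity, not fidelity).
    pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=penalized_fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]
            child = [c + random.gauss(0, mutation_sigma) for c in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=penalized_fitness)

best = simple_ga()
print("best:", best,
      "objective:", objective(best),
      "violation:", constraint_violation(best))
```

The "worst-case best values" mentioned in the abstract presumably rely on Chebyshev's inequality, which for any distribution bounds the probability mass lying more than k standard deviations from the mean by 1/k^2; the paper should be consulted for how the bound is actually applied to the sampled best values.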

Citation (APA)

Kuri-Morales, A. F., & Gutiérrez-García, J. (2002). Penalty function methods for constrained optimization with genetic algorithms: A statistical analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2313, pp. 108–117). Springer Verlag. https://doi.org/10.1007/3-540-46016-0_12
