Optimizing monotone functions can be difficult

Abstract

Extending previous analyses on function classes like linear functions, we analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean functions that are strictly monotone. Contrary to what one would expect, not all of these functions are easy to optimize. The choice of the constant c in the mutation probability p(n) = c/n can make a decisive difference. We show that if c < 1, then the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations. For c = 1, we can still prove an upper bound of O(n^{3/2}). However, for c > 33, we present a strictly monotone function such that the (1+1) EA with overwhelming probability does not find the optimum within 2^{Ω(n)} iterations. This is the first time that we observe that a constant factor change of the mutation probability changes the run-time by more than constant factors. © 2010 Springer-Verlag.
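The (1+1) EA studied in the abstract can be sketched as follows: maintain a single bit string, in each iteration flip every bit independently with probability p = c/n, and accept the offspring if its fitness is at least that of the parent. For a strictly monotone pseudo-Boolean function, the all-ones string is the unique optimum, so the sketch below stops there. This is a minimal illustrative implementation, not the authors' experimental code; the function name, the choice of OneMax as the example fitness, and the iteration cap are assumptions for demonstration.

```python
import random

def one_plus_one_ea(f, n, c, max_iters=200_000):
    """(1+1) EA sketch: flip each bit independently with probability
    p = c/n; keep the offspring if it is at least as fit as the parent.
    Returns the iteration at which the all-ones optimum was reached,
    or None if the cap was hit first. (Illustrative, not the paper's code.)"""
    p = c / n
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        # Standard bit mutation: each bit flips independently with prob. p.
        y = [b ^ (random.random() < p) for b in x]
        fy = f(y)
        if fy >= fx:  # elitist selection: accept ties and improvements
            x, fx = y, fy
        # For any strictly monotone function the unique optimum is 1^n.
        if x == [1] * n:
            return t
    return None

def onemax(x):
    """OneMax, a simple strictly monotone function: count of one-bits."""
    return sum(x)
```

For example, `one_plus_one_ea(onemax, 100, 0.9)` runs the algorithm in the easy regime c < 1, where the abstract's Θ(n log n) bound applies; the hard functions constructed for c > 33 are substantially more involved than OneMax.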

Citation (APA)

Doerr, B., Jansen, T., Sudholt, D., Winzen, C., & Zarges, C. (2010). Optimizing monotone functions can be difficult. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6238 LNCS, pp. 42–51). https://doi.org/10.1007/978-3-642-15844-5_5
