Graceful scaling on uniform versus steep-tailed noise

Abstract

Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) drawn from a Gaussian distribution. In particular, for this setting it was shown that the (μ + 1)-EA on OneMax does not scale gracefully (higher noise cannot be compensated efficiently by a higher μ). In this paper we want to understand whether there is anything special about the Gaussian distribution that makes the (μ + 1)-EA fail to scale gracefully. We keep the setting of posterior noise but look at other distributions. We see that, for exponential tails, the (μ + 1)-EA on OneMax also does not scale gracefully, for reasons similar to those in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions), the (μ + 1)-EA on OneMax does scale gracefully, indicating the importance of the noise model.
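Since the abstract centers on the (μ + 1)-EA on OneMax with additive posterior noise, a minimal sketch of that setting may help fix ideas. The following Python snippet is an illustration under assumptions, not the paper's implementation: the survivor-selection details (here, re-evaluating every individual with fresh noise each generation and discarding the noisy-worst) and the noise parameters are hypothetical choices for demonstration.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def noisy_fitness(x, noise):
    """Posterior noise: noise is added after the fitness evaluation."""
    return onemax(x) + noise()

def mu_plus_one_ea(n, mu, noise, max_gens=100_000):
    """(mu+1)-EA on OneMax with additive posterior noise (illustrative sketch).

    Each generation, a uniformly chosen parent is mutated by independent
    bit flips (rate 1/n); all mu+1 candidates are then re-evaluated with
    fresh noise, and the one with the worst noisy fitness is discarded.
    """
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_gens):
        parent = random.choice(pop)
        child = [1 - b if random.random() < 1 / n else b for b in parent]
        pop.append(child)
        # Survivor selection on noisy fitness values.
        pop.remove(min(pop, key=lambda x: noisy_fitness(x, noise)))
        if any(onemax(x) == n for x in pop):
            return True  # true optimum present in the population
    return False

# Two posterior noise models matching the paper's contrast
# (variance/range values are arbitrary examples):
gaussian = lambda: random.gauss(0, 5)    # steep-tailed (Gaussian) noise
uniform = lambda: random.uniform(-5, 5)  # bounded, uniform noise

print(mu_plus_one_ea(n=20, mu=8, noise=uniform))
```

Swapping `uniform` for `gaussian` changes only the noise distribution, which is exactly the axis along which the paper contrasts graceful versus non-graceful scaling.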

Citation (APA)

Friedrich, T., Kötzing, T., Krejca, M. S., & Sutton, A. M. (2016). Graceful scaling on uniform versus steep-tailed noise. In Lecture Notes in Computer Science (Vol. 9921, pp. 761–770). Springer. https://doi.org/10.1007/978-3-319-45823-6_71
