Truncation selection and Gaussian EDA: Bounds for sustainable progress in high-dimensional spaces

Abstract

In real-valued estimation-of-distribution algorithms, the Gaussian distribution is often used along with maximum-likelihood (ML) estimation of its parameters. Such a process is highly prone to premature convergence. The simplest method for preventing premature convergence of the Gaussian distribution is to enlarge the maximum-likelihood estimate of σ by a constant factor k each generation. Such a factor should be large enough to prevent convergence on slopes of the fitness function, but small enough to allow the algorithm to converge in the neighborhood of the optimum. Previous work showed that for truncation selection such an admissible k exists in the 1D case. In this article it is shown experimentally that for the Gaussian EDA with truncation selection in high-dimensional spaces no admissible k exists! © 2008 Springer-Verlag Berlin Heidelberg.
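
To make the mechanism concrete, the following is a minimal sketch (not code from the paper) of a Gaussian EDA with truncation selection and a constant variance-enlargement factor k. The sphere fitness function, the diagonal (independent) Gaussian model, and all parameter values are illustrative assumptions.

# Minimal sketch of a Gaussian EDA with truncation selection and a constant
# variance-enlargement factor k, assuming a sphere fitness function and a
# diagonal (per-dimension) Gaussian model. Parameter values are illustrative.
import numpy as np

def gaussian_eda(dim=10, pop_size=100, tau=0.3, k=1.1, generations=200, seed=0):
    """Run a Gaussian EDA; tau is the truncation ratio, k scales the ML sigma."""
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5.0, 5.0, dim)      # initial model mean
    sigma = np.full(dim, 2.0)               # initial per-dimension std. dev.
    n_sel = max(2, int(tau * pop_size))     # number of selected (best) individuals

    for gen in range(generations):
        # Sample the population from the current Gaussian model.
        pop = rng.normal(mean, sigma, size=(pop_size, dim))
        fitness = np.sum(pop ** 2, axis=1)  # sphere function (minimization)

        # Truncation selection: keep the best tau * pop_size individuals.
        selected = pop[np.argsort(fitness)[:n_sel]]

        # Maximum-likelihood estimates of the Gaussian parameters ...
        mean = selected.mean(axis=0)
        ml_sigma = selected.std(axis=0)     # ML estimate (denominator N)

        # ... with the ML sigma enlarged by a constant factor k each generation
        # to counteract the variance loss caused by selection.
        sigma = k * ml_sigma

    return mean, sigma

Comparing, say, gaussian_eda(dim=2, k=1.2) with gaussian_eda(dim=50, k=1.2) lets one observe how the same factor k behaves as the dimension grows, which is the trade-off the paper studies.
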

Citation

Pošík, P. (2008). Truncation selection and Gaussian EDA: Bounds for sustainable progress in high-dimensional spaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4974 LNCS, pp. 525–534). https://doi.org/10.1007/978-3-540-78761-7_58
