A new EDA by a gradient-driven density

Abstract

This paper introduces the Gradient-driven Density Function (∇dD) approach and its application to Estimation of Distribution Algorithms (EDAs). In order to compute the ∇dD, we also introduce the Expected Gradient Estimate (EGE), an estimate of the gradient based on information from other individuals. Whilst the EGE delivers an estimate of the gradient vector at the position of any individual, the ∇dD delivers a statistical model (e.g. the normal distribution) that allows new individuals to be sampled around the direction of the estimated gradient. Hence, in the proposed EDA, the gradient information is passed on to the new population. The computation of the EGE vector requires no additional function evaluations. It is worth noting that this paper focuses on black-box optimization. The proposed EDA is tested on a benchmark of 10 problems, and statistical tests show that the proposal performs competitively.
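The abstract's two ingredients can be sketched in a few lines. The code below is an illustrative stand-in, not the paper's actual formulas: it assumes a simple least-squares fit of a linear model to the population's existing fitness values as the gradient estimate (reusing evaluations, so no extra function calls, in the spirit of the EGE), and a normal density whose mean is shifted a step along the estimated descent direction (in the spirit of the ∇dD). The names `expected_gradient_estimate`, `sample_gradient_driven`, `step`, and `sigma` are hypothetical.

```python
import numpy as np

def expected_gradient_estimate(i, pop, fvals):
    """Illustrative gradient estimate at pop[i] from the rest of the
    population (NOT the paper's exact EGE formula): fit the linear model
    f(x_j) - f(x_i) ~ g . (x_j - x_i) by least squares, reusing the
    fitness values already computed, so no extra evaluations are needed."""
    diffs = np.delete(pop, i, axis=0) - pop[i]   # displacement vectors, (n-1, d)
    dfs = np.delete(fvals, i) - fvals[i]         # fitness differences, (n-1,)
    g, *_ = np.linalg.lstsq(diffs, dfs, rcond=None)
    return g

def sample_gradient_driven(i, pop, fvals, step=0.5, sigma=0.1, rng=None):
    """Sample one offspring from a normal density centred a step along the
    estimated descent direction (hypothetical gradient-driven density,
    assuming minimisation)."""
    rng = np.random.default_rng() if rng is None else rng
    g = expected_gradient_estimate(i, pop, fvals)
    norm = np.linalg.norm(g)
    mean = pop[i] - step * g / norm if norm > 0 else pop[i]
    return rng.normal(mean, sigma)   # offspring inherits the gradient direction
```

On a smooth function such as the sphere, the least-squares estimate points roughly along the true gradient, so offspring sampled this way tend to land downhill of their parent, which is the mechanism the abstract describes.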

Citation (APA)

Domínguez, I. S., Aguirre, A. H., & Valdez, S. I. (2014). A new EDA by a gradient-driven density. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8672, 352–361. https://doi.org/10.1007/978-3-319-10762-2_35
