Linkage learning in estimation of distribution algorithms


Abstract

This chapter explains how the structural learning performed by multivariate estimation of distribution algorithms (EDAs) while building their probabilistic models is a form of linkage learning. We then show how the linkage-learning mechanisms of multivariate EDAs can be misled with the help of two test problems: the concatenated parity function (CPF) and the concatenated parity/trap function (CP/TF). Although these functions are separable, with bounded complexity and uniformly scaled sub-function contributions, the hierarchical Bayesian Optimization Algorithm (hBOA) scales exponentially on both. We argue that test problems containing parity functions are hard for EDAs because there are no interactions in the contribution to fitness between any strict subset of a parity function's bits. This means that as population sizes increase, the dependency between variable values for any strict subset of a parity function's bits decreases. Unfortunately, most EDAs, including hBOA, search for their models by looking for dependencies between pairs of variables (at least at first). We make suggestions on how EDAs could be adjusted to handle parity problems, but also comment on the apparently inevitable computational cost.
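The structure of a concatenated parity function can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the chapter's exact definition: it assumes each non-overlapping k-bit block contributes 1 when its parity is even and 0 otherwise (the specific reward values and block size are hypothetical). It makes concrete why the function is separable and uniformly scaled, yet carries no fitness signal in any strict subset of a block's bits.

```python
def cpf_fitness(bits, k=4):
    """Hypothetical concatenated parity function (CPF) fitness.

    The string is split into non-overlapping k-bit blocks; each block
    contributes 1 if it has even parity, else 0. The contribution depends
    on all k bits of a block jointly, so the marginal distribution over
    any strict subset of a block's bits carries no fitness information.
    """
    assert len(bits) % k == 0, "string length must be a multiple of k"
    total = 0
    for i in range(0, len(bits), k):
        block = bits[i:i + k]
        # Parity flips with every single-bit change, so no pairwise
        # dependency between bits within a block is visible to a
        # model builder that starts from pairwise statistics.
        total += 1 if sum(block) % 2 == 0 else 0
    return total
```

For example, with k = 4 the strings 0000 0000 and 1100 0011 both score 2, while flipping any single bit of either string changes that block's contribution.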

Citation (APA)

Coffin, D., & Smith, R. E. (2008). Linkage learning in estimation of distribution algorithms. Studies in Computational Intelligence, 157, 141–156. https://doi.org/10.1007/978-3-540-85068-7_7
