Estimation-of-distribution algorithms (EDAs) have been shown to perform well on a wide variety of single-objective optimization problems with binary and real-valued variables. In this chapter we look into the extension of the EDA paradigm to multi-objective optimization. To this end, we structure the chapter around the introduction of a simple but effective EDA for multi-objective optimization: the naive MIDEA (mixture-based multi-objective iterated density-estimation evolutionary algorithm). The probabilistic model in this algorithm is a mixture distribution in which each component is a univariate factorization. As will be shown in this chapter, mixture distributions allow for wide-spread exploration of a multi-objective front, whereas most traditional operators concentrate on a specific part of the front. This wide-spread exploration aids diversity preservation, which is important in multi-objective optimization. To further improve and maintain the diversity obtained by the mixture distribution, the naive MIDEA uses a specialized diversity-preserving selection operator. We verify the effectiveness of the naive MIDEA in two different problem domains and compare it with two other well-known, efficient multi-objective evolutionary algorithms (MOEAs).
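The core model the abstract describes is a mixture distribution whose components are univariate factorizations: selected solutions are partitioned into clusters, a separate distribution is estimated per cluster with all variables treated independently, and new solutions are sampled from the resulting mixture. The sketch below illustrates this idea for real-valued variables using univariate normal distributions; the function names, the simple nearest-leader clustering, and the uniform choice of mixture component are illustrative assumptions, not the authors' exact procedure.

```python
import random
import statistics

def fit_mixture(points, k, seed=0):
    """Partition the selected points into k clusters (illustrative
    nearest-leader assignment, not the paper's clustering method) and fit
    one univariate normal per variable per cluster, so each mixture
    component is a univariate factorization."""
    rng = random.Random(seed)
    leaders = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for p in points:
        i = min(range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, leaders[j])))
        clusters[i].append(p)
    model = []
    for c in clusters:
        if len(c) < 2:
            c = points  # degenerate cluster: fall back to the full selection
        dims = list(zip(*c))
        model.append([(statistics.mean(d), statistics.stdev(d) or 1e-9)
                      for d in dims])
    return model

def sample_mixture(model, n, seed=0):
    """Draw n new solutions: pick a mixture component (uniformly here, as a
    simplification), then sample every variable independently from that
    component's univariate normal."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        component = rng.choice(model)
        samples.append([rng.gauss(mu, sd) for mu, sd in component])
    return samples
```

Because each component models a different cluster of the selected solutions, the sampled offspring spread across several regions of the objective space at once, which is the mechanism the chapter credits for wide-spread exploration of the front.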
CITATION
Bosman, P. A. N., & Thierens, D. (2007). Multi-objective optimization with the naive MIDEA. In Towards a New Evolutionary Computation (pp. 123–157). Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-32494-1_6