Improving sampling in evolution strategies through mixture-based distributions built from past problem instances

Abstract

The notion of learning from different problem instances, although a long-established one, has in recent years regained popularity within the optimization community. Notable endeavors have drawn inspiration from machine learning methods as a means for algorithm selection and solution transfer. Surprisingly, however, approaches centered around internal sampling models have not been revisited, even though notable algorithms of this kind were established over the past decades. In this work, we progress along this direction by investigating a method that allows us to learn an evolutionary search strategy reflecting rough characteristics of a fitness landscape. This model of a search strategy is represented through a flexible mixture-based distribution, which can subsequently be transferred to and adapted for similar problems of interest. We validate this approach in two series of experiments: we first demonstrate the efficacy of the recovered distributions, and subsequently investigate the transfer, using a systematic approach from the literature to generate benchmarking scenarios.
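The transfer idea sketched in the abstract can be illustrated with a minimal toy example. This is not the authors' actual algorithm, only an assumed simplification: elite solutions collected on several similar source instances define an equal-weight Gaussian mixture (one component per elite, kernel-density style), which is then sampled to seed search on a new, similar target instance. All problem definitions, function names, and parameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x, shift):
    """Shifted sphere function: a stand-in for a family of similar instances."""
    return float(np.sum((x - shift) ** 2))

def run_es(f, dim=2, iters=200, sigma=0.3):
    """A simple (1+1)-ES; returns the sequence of accepted (elite) solutions."""
    x = rng.uniform(-5.0, 5.0, dim)
    fx = f(x)
    elites = []
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(dim)
        fc = f(cand)
        if fc < fx:          # greedy acceptance
            x, fx = cand, fc
            elites.append(x.copy())
    return elites

# Phase 1: solve several similar source instances, keep their late elites.
source_shifts = [np.array([2.0, 2.0]) + 0.3 * rng.standard_normal(2)
                 for _ in range(5)]
all_elites = np.array([e for s in source_shifts
                       for e in run_es(lambda x: sphere(x, s))[-10:]])

# Phase 2: the learned "search distribution" is an equal-weight Gaussian
# mixture with one component centered on each stored elite.
def sample_mixture(n, bandwidth=0.2):
    idx = rng.integers(len(all_elites), size=n)
    return all_elites[idx] + bandwidth * rng.standard_normal((n, all_elites.shape[1]))

# Transfer: seed search on a *similar* target instance from the mixture,
# and compare against uninformed uniform initialization.
target = lambda x: sphere(x, np.array([2.1, 1.9]))
mix_best = min(sample_mixture(20), key=target)
uni_best = min(rng.uniform(-5.0, 5.0, (20, 2)), key=target)
```

Because the mixture concentrates probability mass in the region that was repeatedly successful on the source instances, starting points drawn from it typically lie far closer to the target optimum than uniform draws, which is the intuition behind transferring a learned sampling distribution.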

Citation (APA)

Friess, S., Tiňo, P., Menzel, S., Sendhoff, B., & Yao, X. (2020). Improving sampling in evolution strategies through mixture-based distributions built from past problem instances. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12269 LNCS, pp. 583–596). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58112-1_40
