Improving neuroevolution efficiency by surrogate model-based optimization with phenotypic distance kernels

Abstract

In NeuroEvolution, the topologies of artificial neural networks are optimized with evolutionary algorithms to solve tasks in data regression, data classification, or reinforcement learning. One downside of NeuroEvolution is the large number of fitness evaluations it requires, which can render it inefficient for tasks with expensive evaluations, such as real-time learning. For such expensive optimization tasks, surrogate model-based optimization is frequently applied because of its good evaluation efficiency. While a combination of both procedures appears to be a valuable solution, defining adequate distance measures for the surrogate modeling process is difficult. In this study, we extend Cartesian genetic programming of artificial neural networks by the use of surrogate model-based optimization. We propose different distance measures and test our algorithm on a replicable benchmark task. The results indicate that we can significantly increase the evaluation efficiency and that a phenotypic distance, which is based on the behavior of the associated neural networks, is most promising.
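
To make the idea of a phenotypic distance concrete, the sketch below (not the authors' code) compares two candidate networks by their behavior, i.e., by their outputs on a shared, fixed set of sample inputs, and plugs that distance into an RBF-style kernel that a distance-based surrogate model could use. The network representation, the sample inputs, and the kernel width `theta` are illustrative assumptions.

```python
import numpy as np

def phenotypic_distance(net_a, net_b, samples):
    """Euclidean distance between the two networks' outputs on shared sample inputs."""
    out_a = np.array([net_a(x) for x in samples])
    out_b = np.array([net_b(x) for x in samples])
    return np.linalg.norm(out_a - out_b)

def phenotypic_kernel(net_a, net_b, samples, theta=1.0):
    """RBF-style kernel built on the phenotypic (behavioral) distance."""
    d = phenotypic_distance(net_a, net_b, samples)
    return np.exp(-theta * d**2)

# Toy usage: two tiny fixed-weight "networks" evaluated on random sample inputs.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(20, 2))
net_1 = lambda x: np.tanh(0.5 * x[0] + 0.3 * x[1])
net_2 = lambda x: np.tanh(0.4 * x[0] - 0.2 * x[1])
print(phenotypic_kernel(net_1, net_2, samples))
```

Because this distance depends only on network behavior rather than on the encoding, it sidesteps the difficulty of comparing differing topologies directly, which is what makes it attractive for the surrogate modeling step.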

Citation (APA)

Stork, J., Zaefferer, M., & Bartz-Beielstein, T. (2019). Improving neuroevolution efficiency by surrogate model-based optimization with phenotypic distance kernels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11454 LNCS, pp. 504–519). Springer Verlag. https://doi.org/10.1007/978-3-030-16692-2_34
