Evolutionary computation for reinforcement learning

Abstract

Algorithms for evolutionary computation, which simulate the process of natural selection to solve optimization problems, are an effective tool for discovering high-performing reinforcement-learning policies. Because they can automatically find good representations, handle continuous action spaces, and cope with partial observability, evolutionary reinforcement-learning approaches have a strong empirical track record, sometimes significantly outperforming temporal-difference methods. This chapter surveys research on the application of evolutionary computation to reinforcement learning, overviewing methods for evolving neural-network topologies and weights, hybrid methods that also use temporal-difference methods, coevolutionary methods for multi-agent settings, generative and developmental systems, and methods for on-line evolutionary reinforcement learning.
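To make the core idea concrete, below is a minimal, self-contained sketch (not taken from the chapter) of direct policy search by evolutionary computation: a population of neural-network weight vectors is evaluated by episodic rollouts on a toy goal-reaching task and improved with truncation selection plus Gaussian mutation. The environment, network size, and hyperparameters are illustrative assumptions, not anything specified by the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy episodic task: the agent starts at position 0 on a line and must reach +10
# within 50 steps; the observation is the scaled position, the action is a
# continuous step in [-1, 1]. Fitness = episodic return = -(steps used).
def rollout(weights, horizon=50):
    w1 = weights[:8].reshape(1, 8)      # input -> hidden weights
    b1 = weights[8:16]                  # hidden biases
    w2 = weights[16:24].reshape(8, 1)   # hidden -> output weights
    b2 = weights[24]                    # output bias
    pos, ret = 0.0, 0.0
    for _ in range(horizon):
        obs = np.array([[pos / 10.0]])
        hidden = np.tanh(obs @ w1 + b1)
        action = np.tanh(hidden @ w2 + b2)[0, 0]   # continuous action in [-1, 1]
        pos += action
        ret -= 1.0                                 # cost of one more step
        if pos >= 10.0:                            # goal reached, episode ends
            break
    return ret

n_params = 25
pop_size, n_parents, n_generations = 40, 10, 30
population = rng.normal(0.0, 1.0, size=(pop_size, n_params))

for gen in range(n_generations):
    fitness = np.array([rollout(ind) for ind in population])
    parents = population[np.argsort(fitness)[-n_parents:]]   # truncation selection
    # Next generation: mutated copies of the selected parents (Gaussian mutation).
    children = parents[rng.integers(n_parents, size=pop_size)]
    population = children + rng.normal(0.0, 0.1, size=children.shape)
    print(f"generation {gen:2d}  best return {fitness.max():.1f}")
```

Because fitness is just the episodic return, the same loop applies unchanged to tasks with continuous actions or partial observability, which is one reason direct policy search of this kind is attractive relative to temporal-difference methods in those settings.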

Citation (APA)

Whiteson, S. (2012). Evolutionary computation for reinforcement learning. In Adaptation, Learning, and Optimization (Vol. 12, pp. 325–355). Springer Verlag. https://doi.org/10.1007/978-3-642-27645-3_10
