Safety-informed mutations for evolutionary deep reinforcement learning


Abstract

Evolutionary Algorithms (EAs) have been combined with Deep Reinforcement Learning (DRL) to address the limitations of each approach while leveraging their complementary benefits. In this paper, we discuss objective-informed mutations that bias the evolutionary population toward exploring a desired objective. We focus on Safe DRL domains to show how these mutations exploit visited unsafe states to search for safer actions. Empirical evidence on a 12-degrees-of-freedom locomotion benchmark and a practical navigation task confirms that our approach improves the safety of the policy while maintaining returns comparable to those of the original DRL algorithm.
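The abstract does not specify the mutation operator, so the following is only a minimal, hypothetical sketch of what a "safety-informed" mutation could look like: policies are plain parameter vectors, a surrogate `safety_cost` built from recorded unsafe states stands in for the safety signal, and the informed mutation simply keeps the lowest-cost candidate among several random perturbations. All names and the cost model are illustrative assumptions, not the paper's actual method.

```python
import random

random.seed(0)

# Assumed record of states where the agent previously violated safety.
# In the real setting these would come from the DRL agent's experience.
UNSAFE_STATES = [[2.0, 2.0], [-3.0, 1.0]]

def safety_cost(policy):
    """Hypothetical surrogate cost: higher when the policy's parameters
    are close to the (toy) unsafe regions, lower when far away."""
    return sum(
        1.0 / (1.0 + sum((p - u) ** 2 for p, u in zip(policy, s)))
        for s in UNSAFE_STATES
    )

def standard_mutation(policy, sigma=0.3):
    """Plain Gaussian mutation, as typically used in evolutionary DRL."""
    return [p + random.gauss(0.0, sigma) for p in policy]

def safety_informed_mutation(policy, sigma=0.3, trials=8):
    """Bias the mutation toward the safety objective: sample several
    candidate mutations and keep the one with the lowest surrogate
    safety cost, instead of accepting a purely random perturbation."""
    candidates = [standard_mutation(policy, sigma) for _ in range(trials)]
    return min(candidates, key=safety_cost)

parent = [0.0, 0.0]
child_plain = standard_mutation(parent)
child_safe = safety_informed_mutation(parent)
print("plain cost:", safety_cost(child_plain))
print("safe  cost:", safety_cost(child_safe))
```

The design choice here is deliberately simple: selecting the best of `trials` candidates biases offspring toward the safety objective without requiring gradients of the cost, which keeps the operator compatible with any gradient-free evolutionary loop.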

Citation (APA)
Marchesini, E., & Amato, C. (2022). Safety-informed mutations for evolutionary deep reinforcement learning. In GECCO 2022 Companion - Proceedings of the 2022 Genetic and Evolutionary Computation Conference (pp. 1966–1970). Association for Computing Machinery, Inc. https://doi.org/10.1145/3520304.3533980
