DGPO: Discovering Multiple Strategies with Diversity-Guided Policy Optimization


Abstract

Most reinforcement learning algorithms seek a single optimal strategy that solves a given task. However, it can often be valuable to learn a diverse set of solutions, for instance, to make an agent’s interaction with users more engaging, or to improve the robustness of a policy to unexpected perturbations. We propose Diversity-Guided Policy Optimization (DGPO), an on-policy algorithm that discovers multiple strategies for solving a given task. Unlike prior work, it achieves this with a shared policy network trained over a single run. Specifically, we design an intrinsic reward based on an information-theoretic diversity objective. Our final objective alternates between a constraint on the diversity of the strategies and a constraint on the extrinsic reward. We solve the constrained optimization problem by casting it as a probabilistic inference task and use policy iteration to maximize the derived lower bound. Experimental results show that our method efficiently discovers diverse strategies in a wide variety of reinforcement learning tasks. Compared to baseline methods, DGPO achieves comparable rewards while discovering more diverse strategies, often with better sample efficiency.
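To make the information-theoretic diversity objective mentioned in the abstract concrete, the sketch below shows one common way such an intrinsic reward can be implemented: a discriminator predicts which latent strategy z produced a visited state, and the reward log q(z|s) − log p(z) is high when strategies are distinguishable from the states they visit. This is a minimal illustrative sketch in PyTorch; the class names, network sizes, and the exact reward form are assumptions, not the paper's precise formulation.

```python
# Illustrative sketch of a discriminator-based diversity intrinsic reward.
# All names (LatentDiscriminator, diversity_intrinsic_reward) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentDiscriminator(nn.Module):
    """Predicts which latent strategy z generated a visited state s."""

    def __init__(self, state_dim: int, num_strategies: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_strategies),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # logits over strategies


def diversity_intrinsic_reward(disc: LatentDiscriminator,
                               state: torch.Tensor,
                               z: torch.Tensor,
                               num_strategies: int) -> torch.Tensor:
    """Intrinsic reward r_int = log q(z|s) - log p(z), with p(z) uniform.

    The reward is high when the discriminator can tell the strategies
    apart from the states they visit, i.e. the strategies behave differently.
    """
    log_q = F.log_softmax(disc(state), dim=-1)              # log q(z|s)
    log_q_z = log_q.gather(-1, z.unsqueeze(-1)).squeeze(-1)  # pick the sampled z
    log_p_z = torch.log(torch.tensor(1.0 / num_strategies))  # uniform prior over z
    return log_q_z - log_p_z


if __name__ == "__main__":
    # Usage: compute the intrinsic reward for a batch of on-policy states.
    disc = LatentDiscriminator(state_dim=8, num_strategies=4)
    states = torch.randn(32, 8)              # batch of visited states
    zs = torch.randint(0, 4, (32,))          # strategy label assigned to each rollout
    r_int = diversity_intrinsic_reward(disc, states, zs, num_strategies=4)
    print(r_int.shape)  # torch.Size([32])
```

In a full training loop, an intrinsic reward of this kind would be combined with the extrinsic task reward during on-policy optimization; how DGPO balances the two via its alternating constraints is described in the paper itself.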

Citation (APA)

Chen, W., Huang, S., Chiang, Y., Pearce, T., Tu, W. W., Chen, T., & Zhu, J. (2024). DGPO: Discovering Multiple Strategies with Diversity-Guided Policy Optimization. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 11390–11398). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i10.29019
