Sample Efficient Reinforcement Learning with REINFORCE

Abstract

Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundations of their global convergence theory. However, prior works have required either exact gradients or mini-batch stochastic gradients based on the state-action visitation measure with a diverging batch size, requirements that limit their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient from a single trajectory or a fixed-size mini-batch of trajectories, under softmax parametrization and log-barrier regularization, together with the widely used REINFORCE gradient estimation procedure. By controlling the number of “bad” episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high-probability regret bound as well as almost-sure global convergence of the average regret at an asymptotically sub-linear rate. These are the first global convergence and sample-efficiency results for the well-known REINFORCE algorithm, and they contribute to a better understanding of its performance in practice.
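
The following is a minimal sketch, not the paper's exact algorithm, of the kind of update the abstract describes: a single-trajectory REINFORCE gradient estimate for a softmax-parametrized tabular policy with a log-barrier regularization term. The toy MDP, the regularization weight lam, the step size, and the horizon are all illustrative assumptions; the paper's doubling trick for the step-size schedule and the analysis-specific constants are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy tabular MDP (hypothetical, for illustration only) ---
S, A, H, gamma = 3, 2, 20, 0.95
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
R = rng.uniform(0.0, 1.0, size=(S, A))       # deterministic reward for each (s, a)

def softmax_policy(theta, s):
    # pi(.|s) under the softmax parametrization
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def sample_episode(theta):
    s, traj = 0, []
    for _ in range(H):
        pi = softmax_policy(theta, s)
        a = rng.choice(A, p=pi)
        traj.append((s, a, R[s, a]))
        s = rng.choice(S, p=P[s, a])
    return traj

def reinforce_gradient(theta, traj, lam):
    grad = np.zeros_like(theta)
    # discounted returns-to-go G_t
    G, returns = 0.0, []
    for (_, _, r) in reversed(traj):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    # single-trajectory REINFORCE estimate: sum_t gamma^t * G_t * grad log pi(a_t|s_t)
    for t, (s, a, _) in enumerate(traj):
        pi = softmax_policy(theta, s)
        grad_log = -pi
        grad_log[a] += 1.0                    # d log pi(a|s) / d theta[s, :]
        grad[s] += (gamma ** t) * returns[t] * grad_log
    # gradient of the log-barrier regularizer (lam / (S*A)) * sum_{s,a} log pi(a|s)
    for s in range(S):
        pi = softmax_policy(theta, s)
        grad[s] += (lam / (S * A)) * (1.0 - A * pi)
    return grad

theta, lam, step = np.zeros((S, A)), 0.1, 0.05
for episode in range(2000):
    traj = sample_episode(theta)
    theta += step * reinforce_gradient(theta, traj, lam)   # one trajectory per update

print("final policy at state 0:", softmax_policy(theta, 0))
```

The log-barrier term keeps every action probability bounded away from zero, which is what lets the analysis control the number of “bad” episodes; in this sketch its gradient follows from d log pi(a'|s)/d theta[s,a] = 1{a'=a} - pi(a|s) for the softmax parametrization.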

Citation (APA)

Zhang, J., Kim, J., O’Donoghue, B., & Boyd, S. (2021). Sample Efficient Reinforcement Learning with REINFORCE. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 12B, pp. 10887–10895). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i12.17300
