Deep conservative policy iteration


Abstract

Conservative Policy Iteration (CPI) is a founding algorithm of Approximate Dynamic Programming (ADP). Its core principle is to stabilize greediness through stochastic mixtures of consecutive policies. It comes with strong theoretical guarantees and has inspired approaches in deep Reinforcement Learning (RL). However, CPI itself has rarely been implemented, never with neural networks, and has only been evaluated on toy problems. In this paper, we show how CPI can be practically combined with deep RL with discrete actions, in an off-policy manner. We also introduce adaptive mixture rates inspired by the theory. We thoroughly evaluate the resulting algorithm on the simple Cartpole problem, and validate the proposed method on a representative subset of Atari games. Overall, this work suggests that revisiting classic ADP may lead to improved and more stable deep RL algorithms.
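For context, the mixture scheme the abstract refers to can be sketched as follows. This recaps the classic CPI update of Kakade and Langford (2002) rather than the paper's deep variant, and the notation (the mixture rate, the greedy operator, the Q-function) is a standard choice, not taken from the abstract itself: at each iteration, the new policy is a stochastic mixture of the current policy and a policy that is greedy with respect to the current policy's Q-function.

```latex
% Classic CPI update: mix the current policy \pi_k with a greedy
% improvement of it. The mixture rate \alpha_k \in (0, 1] is the
% quantity the paper's adaptive scheme tunes; \alpha_k = 1 recovers
% plain (unstabilized) policy iteration.
\[
  \pi_{k+1} = (1 - \alpha_k)\,\pi_k + \alpha_k\,\mathcal{G}(\pi_k),
  \qquad
  \mathcal{G}(\pi_k)(s) \in \operatorname*{arg\,max}_{a \in \mathcal{A}} q_{\pi_k}(s, a),
\]
% where q_{\pi_k} is the state-action value function of \pi_k.
```

A small mixture rate keeps the new policy close to the current one, which is roughly what "stabilizing greediness" means here: the greedy step is damped so that each update remains a controlled improvement rather than an abrupt policy switch.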

Citation (APA)
Vieillard, N., Pietquin, O., & Geist, M. (2020). Deep conservative policy iteration. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6070–6077). AAAI Press. https://doi.org/10.1609/aaai.v34i04.6070
