Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation

Citations: 20 · Mendeley readers: 11

Abstract

Recently, deep reinforcement learning (RL) has achieved remarkable empirical success by integrating deep neural networks into RL frameworks. However, these algorithms often require a large number of training samples and admit little theoretical understanding. To mitigate these issues, we propose a theoretically principled nearest neighbor (NN) function approximator that can improve the value networks in deep RL methods. Inspired by human similarity judgments, the NN approximator estimates the action values using rollouts on past observations and can provably obtain a small regret bound that depends only on the intrinsic complexity of the environment. We present (1) Nearest Neighbor Actor-Critic (NNAC), an online policy gradient algorithm that demonstrates the practicality of combining function approximation with deep RL, and (2) a plug-and-play NN update module that aids the training of existing deep RL methods. Experiments on classical control and MuJoCo locomotion tasks show that the NN-accelerated agents achieve higher sample efficiency and stability than the baseline agents. Based on its theoretical benefits, we believe that the NN approximator can be further applied to other complex domains to speed up learning.
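To make the core idea concrete, below is a minimal Python sketch of a nearest-neighbor action-value estimator of the kind the abstract describes: it stores past observations with their rollout returns and queries them by distance. The class name, the Euclidean metric, the per-action buffers, and the Lipschitz-style optimistic bound are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

class NNValueApproximator:
    """Illustrative nearest-neighbor action-value estimator (not the paper's exact method).

    Stores (state, return) pairs per action from past rollouts and estimates
    Q(s, a) optimistically from the closest stored states.
    """

    def __init__(self, num_actions, lipschitz=1.0, v_max=100.0):
        self.num_actions = num_actions
        self.lipschitz = lipschitz   # assumed smoothness constant for the optimistic bound
        self.v_max = v_max           # optimistic default when no data has been stored yet
        self.buffers = [[] for _ in range(num_actions)]  # per-action list of (state, return)

    def add(self, state, action, mc_return):
        """Store a past observation together with its Monte-Carlo rollout return."""
        self.buffers[action].append((np.asarray(state, dtype=float), float(mc_return)))

    def estimate(self, state, action):
        """Optimistic NN estimate: min over stored points of (return + L * distance)."""
        buf = self.buffers[action]
        if not buf:
            return self.v_max
        state = np.asarray(state, dtype=float)
        return min(r + self.lipschitz * np.linalg.norm(state - s) for s, r in buf)

    def greedy_action(self, state):
        """Pick the action with the highest nearest-neighbor value estimate."""
        values = [self.estimate(state, a) for a in range(self.num_actions)]
        return int(np.argmax(values))
```

In a setup like the paper's NNAC or its plug-and-play update module, an estimate of this kind could serve as an additional target or regularizer for the learned value network; the precise integration is described in the paper itself.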

Citation (APA)

Shen, J., & Yang, L. F. (2021). Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11A, pp. 9558–9566). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17151
