Periodic Intra-ensemble Knowledge Distillation for Reinforcement Learning

Abstract

Off-policy ensemble reinforcement learning (RL) methods have demonstrated impressive results across a range of RL benchmark tasks. Recent works suggest that directly imitating experts’ policies in a supervised manner before or during the course of training enables faster policy improvement for an RL agent. Motivated by these recent insights, we propose Periodic Intra-Ensemble Knowledge Distillation (PIEKD). PIEKD is a learning framework that uses an ensemble of policies to act in the environment while periodically sharing knowledge amongst policies in the ensemble through knowledge distillation. Our experiments demonstrate that PIEKD improves upon a state-of-the-art RL method in sample efficiency on several challenging MuJoCo benchmark tasks. Additionally, we perform ablation studies to better understand PIEKD.
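The abstract describes the framework at a high level: an ensemble of policies acts in the environment, and knowledge is periodically shared among members through distillation. The following is a minimal toy sketch of that idea, not the paper's actual method: it assumes (hypothetically) that the member with the highest recent episodic return is chosen as the teacher, that each member's off-policy RL update is stood in for by a noisy improvement step, and that distillation is a supervised regression of a student's actions onto the teacher's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D task: state s ~ N(0, 1); reward = -(a - 2s)^2, so the optimal
# deterministic policy is a = 2s. Each policy is linear: a = w * s.
def episode_return(w, n_steps=32):
    s = rng.standard_normal(n_steps)
    return float(np.mean(-(w * s - 2.0 * s) ** 2))

def distill(w_student, w_teacher, lr=0.5, n_steps=32):
    """One supervised distillation step: regress the student's actions
    onto the teacher's actions under an MSE loss (an assumption for
    this sketch, not necessarily the paper's distillation objective)."""
    s = rng.standard_normal(n_steps)
    grad = np.mean(2.0 * (w_student - w_teacher) * s ** 2)  # d(MSE)/dw
    return w_student - lr * grad

# Ensemble of policies, each improved independently, with periodic
# intra-ensemble distillation toward the current best member.
ensemble = list(rng.uniform(-1.0, 1.0, size=4))
distill_period = 5
for t in range(1, 51):
    # Placeholder for each member's own off-policy RL update
    # (stands in for, e.g., per-member actor-critic updates).
    ensemble = [w + 0.1 * (2.0 - w) + 0.05 * rng.standard_normal()
                for w in ensemble]
    if t % distill_period == 0:
        returns = [episode_return(w) for w in ensemble]
        teacher = ensemble[int(np.argmax(returns))]
        ensemble = [distill(w, teacher) for w in ensemble]

best = max(episode_return(w) for w in ensemble)
print(round(best, 3))
```

In this toy run, all members converge near the optimal weight w = 2, with the periodic distillation phases pulling weaker members toward the best-performing one between independent update phases.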

Citation (APA)

Hong, Z. W., Nagarajan, P., & Maeda, G. (2021). Periodic Intra-ensemble Knowledge Distillation for Reinforcement Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12975 LNAI, pp. 87–103). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-86486-6_6
