Efficient deep reinforcement learning via adaptive policy transfer

34 citations · 79 readers (Mendeley)

Abstract

Transfer learning has shown great potential to accelerate reinforcement learning (RL) by leveraging prior knowledge from previously learned policies of related tasks. Existing approaches either transfer knowledge by explicitly computing similarities between tasks or select appropriate source policies to guide exploration. However, a method that directly optimizes the target policy by adaptively drawing on knowledge from suitable source policies, without explicitly measuring task similarity, has been missing. In this paper, we propose a novel Policy Transfer Framework (PTF) built on this idea. PTF learns when and which source policy is best to reuse for the target policy, and when to terminate that reuse, by modeling multi-policy transfer as an option learning problem. PTF can be easily combined with existing deep reinforcement learning (DRL) methods, and experimental results show that it significantly accelerates RL and surpasses state-of-the-art policy transfer methods in learning efficiency and final performance in both discrete and continuous action spaces.
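The option-style mechanism the abstract describes, learning which source policy to reuse and when to terminate its reuse, can be sketched roughly as follows. This is a toy illustration under assumed simplifications (a fixed termination probability and a bandit-style value estimate per source policy), not the paper's actual PTF algorithm; all names (`OptionSelector`, `select`, `update`) are hypothetical.

```python
import random


class OptionSelector:
    """Toy sketch of option-style source-policy reuse.

    Each source policy is treated as an option: once selected, it stays
    active until a fixed termination probability fires, at which point a
    new option is chosen epsilon-greedily by its estimated reuse value.
    """

    def __init__(self, n_sources, termination_prob=0.1, epsilon=0.1,
                 lr=0.5, seed=0):
        self.q = [0.0] * n_sources      # estimated value of reusing each source policy
        self.beta = termination_prob    # probability the active option terminates per step
        self.epsilon = epsilon          # exploration rate for option selection
        self.lr = lr                    # step size for value updates
        self.current = None             # index of the currently active source policy
        self.rng = random.Random(seed)

    def select(self):
        """Return the index of the source policy to follow this step."""
        # Keep the active option unless it terminates (probability beta).
        if self.current is not None and self.rng.random() >= self.beta:
            return self.current
        # On termination (or at the start), reselect epsilon-greedily.
        if self.rng.random() < self.epsilon:
            self.current = self.rng.randrange(len(self.q))
        else:
            self.current = max(range(len(self.q)), key=lambda i: self.q[i])
        return self.current

    def update(self, option, reward):
        """Move the option's value estimate toward the observed return."""
        self.q[option] += self.lr * (reward - self.q[option])
```

In use, the agent would call `select()` each step to pick which source policy guides the target policy's update, then feed the observed return back through `update()`. The full PTF framework additionally learns the termination condition rather than fixing it, and couples the selected source policy into the target policy's DRL loss.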

Cite

CITATION STYLE

APA

Yang, T., Hao, J., Meng, Z., Zhang, Z., Hu, Y., Chen, Y., … Peng, J. (2020). Efficient deep reinforcement learning via adaptive policy transfer. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 3094–3100). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/428
