TempLe: Learning Template of Transitions for Sample Efficient Multi-task RL

Citations: 5 · Mendeley readers: 11

Abstract

Transferring knowledge among various environments is important for efficiently learning multiple tasks online. Most existing methods directly reuse previously learned models or previously learned optimal policies to learn new tasks. However, these methods may be inefficient when the underlying models or optimal policies differ substantially across tasks. In this paper, we propose Template Learning (TempLe), a PAC-MDP method for multi-task reinforcement learning that can be applied to tasks with varying state/action spaces without prior knowledge of inter-task mappings. TempLe gains sample efficiency by extracting similarities in the transition dynamics across tasks, even when their underlying models or optimal policies have limited commonalities. We present two algorithms, for an “online” and a “finite-model” setting respectively. We prove that the proposed TempLe algorithms achieve much lower sample complexity than single-task learners or state-of-the-art multi-task methods. We show via systematically designed experiments that TempLe universally outperforms state-of-the-art multi-task methods (PAC-MDP or not) across various settings and regimes.
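The abstract's core idea of pooling samples across state-action pairs whose transition dynamics look alike can be illustrated with a toy sketch. Note this is an illustrative stand-in, not the paper's actual TempLe algorithm: the grouping rule (greedy L1-distance matching with a fixed tolerance) and the `tol` threshold are assumptions made here for clarity.

```python
import numpy as np

def group_into_templates(trans_counts, tol=0.1):
    """Greedily group empirical next-state count vectors into shared
    'templates': pairs whose empirical transition distributions are
    within `tol` in L1 distance pool their samples, yielding a tighter
    estimate than each pair could obtain alone (illustrative sketch)."""
    templates = []    # pooled count vectors, one per template
    assignment = []   # template index assigned to each input pair
    for counts in trans_counts:
        counts = np.asarray(counts, dtype=float)
        p = counts / counts.sum()
        for i, pooled in enumerate(templates):
            q = pooled / pooled.sum()
            if np.abs(p - q).sum() <= tol:   # close enough: pool samples
                templates[i] = pooled + counts
                assignment.append(i)
                break
        else:
            templates.append(counts)         # no match: start a new template
            assignment.append(len(templates) - 1)
    return templates, assignment

# Two similar state-action pairs get pooled; the third stays separate.
templates, assignment = group_into_templates(
    [[50, 50, 0], [48, 52, 0], [0, 0, 100]])
```

Here `assignment` comes out as `[0, 0, 1]`, and the first template's pooled counts are `[98, 102, 0]`; pooling roughly doubles the effective sample size for that template, which is the intuition behind TempLe's sample-complexity gains.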

Citation (APA)

Sun, Y., Yin, X., & Huang, F. (2021). TempLe: Learning Template of Transitions for Sample Efficient Multi-task RL. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11A, pp. 9765–9773). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17174
