Meta-Reinforcement Learning via Exploratory Task Clustering

8 citations · 12 Mendeley readers

Abstract

Meta-reinforcement learning (meta-RL) aims to solve new RL tasks quickly by leveraging knowledge from prior tasks. Previous studies often assume a single-mode, homogeneous task distribution, ignoring possible structured heterogeneity among tasks. Such an oversight can hamper effective exploration and adaptation, especially with limited samples. In this work, we harness the structured heterogeneity among tasks via clustering to improve meta-RL, which facilitates knowledge sharing at the cluster level. To support exploration, we also develop a dedicated cluster-level exploratory policy that discovers task clusters via divide-and-conquer. The knowledge from the discovered clusters helps to narrow the search space of task-specific policy learning, leading to more sample-efficient policy adaptation. We evaluate the proposed method on environments with parametric clusters (e.g., rewards and state dynamics in the MuJoCo suite) and non-parametric clusters (e.g., control skills in the Meta-World suite). The results demonstrate strong advantages of our solution over a set of representative meta-RL methods.
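The abstract does not specify the clustering mechanism, but the core idea of narrowing the adaptation search space via task clusters can be illustrated with a minimal sketch. The sketch below is a hypothetical stand-in, not the paper's algorithm: each task is summarized by a low-dimensional "signature" (here, a made-up 2-D behavioral statistic), signatures are grouped with a simple k-means routine, and a new task's adaptation is warm-started from its nearest cluster centroid rather than from scratch.

```python
# Hedged illustration of cluster-level knowledge sharing for meta-RL.
# All task "signatures" and the warm-start scheme are illustrative
# assumptions; the paper's actual exploratory clustering policy differs.

def kmeans(points, k, iters=20):
    # Farthest-point initialization keeps the k seeds in distinct regions.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(
            points,
            key=lambda p: min(sum((a - b) ** 2 for a, b in zip(p, c))
                              for c in centroids)))
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            if clusters[c]:
                centroids[c] = tuple(sum(dim) / len(clusters[c])
                                     for dim in zip(*clusters[c]))
    return centroids

# Synthetic task signatures: two latent clusters near (0, 0) and (5, 5),
# mimicking two qualitatively different task families.
tasks = [(0.1, -0.2), (0.0, 0.3), (-0.1, 0.1),
         (5.2, 4.9), (4.8, 5.1), (5.0, 5.3)]
centroids = kmeans(tasks, k=2)

def init_for_new_task(signature):
    # Warm-start adaptation from the nearest cluster's centroid,
    # narrowing the search space for task-specific policy learning.
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(signature, c)))
```

A new task whose signature lies near one cluster is initialized from that cluster's centroid, so adaptation starts close to a policy already suited to that task family, which is the sample-efficiency argument the abstract makes.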

Citation (APA)

Chu, Z., Cai, R., & Wang, H. (2024). Meta-Reinforcement Learning via Exploratory Task Clustering. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 11633–11641). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i10.29046
