Robustly Learning Composable Options in Deep Reinforcement Learning

Abstract

Hierarchical reinforcement learning (HRL) is only effective for long-horizon problems when high-level skills can be reliably executed in sequence. Unfortunately, learning reliably composable skills is difficult, because all the components of every skill change constantly during learning. We propose three methods for improving the composability of learned skills: representing skill initiation regions using a combination of pessimistic and optimistic classifiers; learning re-targetable policies that are robust to non-stationary subgoal regions; and learning robust option policies using model-based RL. We test these improvements on four sparse-reward maze navigation tasks involving a simulated quadrupedal robot. Each method successively improves the robustness of a baseline skill discovery method, substantially outperforming state-of-the-art flat and hierarchical methods.

Citation (APA)

Bagaria, A., Senthil, J., Slivinski, M., & Konidaris, G. (2021). Robustly learning composable options in deep reinforcement learning. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21) (pp. 2161–2169). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/298
