An efficient approach to model-based hierarchical reinforcement learning


Abstract

We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates subtask execution choices through simulation. The framework is sample efficient and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against state-of-the-art algorithms and scales well to very large problems.
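The abstract names two mechanisms: a transition model whose statistics are shared across subtasks through common action-feature combinations, and simulation-based evaluation of subtask execution choices. The sketch below is a minimal, hypothetical illustration of those two ideas only; it is not the authors' algorithm, and all names (SharedModel, simulate_return, choose_subtask) and the tabular, feature-indexed design are assumptions made for the example.

```python
import random
from collections import defaultdict

class SharedModel:
    """Tabular transition model indexed by (feature, action), so subtasks
    whose dynamics depend on the same features pool their experience."""
    def __init__(self):
        # (feature, action) -> {next_feature: visit count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, feat, action, next_feat):
        self.counts[(feat, action)][next_feat] += 1

    def sample_next(self, feat, action):
        dist = self.counts[(feat, action)]
        if not dist:
            return feat  # unseen pair: assume a self-transition
        feats, counts = zip(*dist.items())
        return random.choices(feats, weights=counts)[0]

def simulate_return(model, feat, policy, reward_fn, horizon=20):
    """Monte Carlo return of running one subtask policy inside the model."""
    total = 0.0
    for _ in range(horizon):
        action = policy(feat)
        nxt = model.sample_next(feat, action)
        total += reward_fn(feat, action, nxt)
        feat = nxt
    return total

def choose_subtask(model, feat, subtask_policies, reward_fn, n_rollouts=30):
    """Score each candidate subtask by simulated rollouts; run the best."""
    def score(policy):
        returns = [simulate_return(model, feat, policy, reward_fn)
                   for _ in range(n_rollouts)]
        return sum(returns) / len(returns)
    return max(subtask_policies, key=lambda name: score(subtask_policies[name]))
```

Indexing the model by (feature, action) rather than by full state is what lets experience gathered in one subtask inform simulated rollouts in another, which is the sample-efficiency argument the abstract makes.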

Citation (APA)

Li, Z., Narayan, A., & Leong, T. Y. (2017). An efficient approach to model-based hierarchical reinforcement learning. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 3583–3589). AAAI Press. https://doi.org/10.1609/aaai.v31i1.11024
