RePReL: Integrating Relational Planning and Reinforcement Learning for Effective Abstraction

Abstract

State abstraction is necessary for better task transfer in complex reinforcement learning environments. Inspired by the benefit of state abstraction in MAXQ and building upon hybrid planner-RL architectures, we propose RePReL, a hierarchical framework that leverages a relational planner to provide useful state abstractions. Our experiments demonstrate that these abstractions enable faster learning and efficient transfer across tasks. More importantly, the framework enables the application of standard RL approaches for learning in structured domains. State abstractions are especially critical in relational settings, where the number and/or types of objects are not fixed a priori. Our experiments clearly show that the RePReL framework not only achieves better performance and more efficient learning on the task at hand but also generalizes better to unseen tasks.
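The abstract describes a hierarchical loop in which a relational planner decomposes a task into subtasks and supplies, for each subtask, an abstraction of the state that a standard RL learner then operates on. The toy sketch below illustrates that idea only; the task names, the `plan`/`abstract` functions, and the tabular learner are illustrative assumptions, not the paper's actual algorithm or API.

```python
# Illustrative sketch of a planner-guided abstraction loop (assumed names,
# not the RePReL implementation): a planner yields subtasks, and each
# subtask's learner sees only the state variables relevant to it.
from collections import defaultdict

def plan(task):
    """Stand-in for a relational planner: returns an ordered list of subtasks."""
    if task == "deliver(pkg, dest)":
        return ["pickup(pkg)", "goto(dest)", "drop(pkg)"]
    return [task]

def abstract(state, subtask):
    """Keep only the state variables relevant to the current subtask --
    the kind of abstraction the planner's knowledge makes possible."""
    relevant = {
        "pickup(pkg)": {"agent_pos", "pkg_pos"},
        "goto(dest)": {"agent_pos", "dest_pos"},
        "drop(pkg)": {"agent_pos", "dest_pos", "holding"},
    }.get(subtask, set(state))
    return frozenset((k, v) for k, v in state.items() if k in relevant)

class QLearner:
    """Tiny tabular Q-learner operating on abstract states."""
    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def best_action(self, s):
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning backup on the *abstract* state.
        target = r + self.gamma * max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

state = {"agent_pos": (0, 0), "pkg_pos": (1, 0),
         "dest_pos": (3, 3), "holding": False}
subtasks = plan("deliver(pkg, dest)")
learner = QLearner(actions=["up", "down", "left", "right", "interact"])
# The pickup learner never sees dest_pos, so its policy transfers to any
# delivery destination -- the generalization benefit the abstract claims.
s = abstract(state, subtasks[0])
```

Because each subtask's value function is defined over the abstracted state only, adding or swapping objects that are irrelevant to that subtask leaves its learned policy unchanged, which is one way the relational setting benefits from these abstractions.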

Citation (APA)

Kokel, H., Manoharan, A., Natarajan, S., Ravindran, B., & Tadepalli, P. (2021). RePReL: Integrating Relational Planning and Reinforcement Learning for Effective Abstraction. In Proceedings International Conference on Automated Planning and Scheduling, ICAPS (Vol. 2021-August, pp. 533–541). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/icaps.v31i1.16001
