Learning and solving regular decision processes

Abstract

Regular Decision Processes (RDPs) are a recently introduced model that extends Markov Decision Processes (MDPs) with non-Markovian dynamics and rewards. The non-Markovian behavior is restricted to depend on regular properties of the history, which can be specified using regular expressions or formulas in linear dynamic logic over finite traces. Fully specified RDPs can be solved by compiling them into an appropriate MDP. Learning RDPs from data is a challenging problem that has not yet been addressed and on which we focus in this paper. Our approach rests on a new representation of RDPs as Mealy Machines that emit a distribution and an expected reward for each state-action pair. Building on this representation, we combine automata learning techniques with history clustering to learn such a Mealy Machine, and we solve it by adapting Monte Carlo Tree Search (MCTS). We empirically evaluate this approach, demonstrating its feasibility.
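
To make the representation concrete, the following is a minimal Python sketch of a Mealy machine whose state-action pairs emit an observation distribution and an expected reward, as described in the abstract. It is not the authors' code; the class, method names, and toy values are illustrative assumptions only.

import random
from dataclasses import dataclass, field

@dataclass
class MealyRDP:
    # transitions[(state, action, observation)] -> next automaton state
    transitions: dict = field(default_factory=dict)
    # emissions[(state, action)] -> ({observation: probability}, expected reward)
    emissions: dict = field(default_factory=dict)
    initial_state: str = "q0"

    def step(self, state, action, rng=random):
        # Look up the distribution and expected reward emitted for (state, action),
        # sample an observation, and follow the deterministic transition on it.
        dist, reward = self.emissions[(state, action)]
        obs = rng.choices(list(dist), weights=list(dist.values()))[0]
        return self.transitions[(state, action, obs)], obs, reward

# Toy two-state machine: the automaton state summarizes the regular property
# of the history on which the dynamics and rewards depend.
rdp = MealyRDP(
    transitions={
        ("q0", "a", "x"): "q1", ("q0", "a", "y"): "q0",
        ("q1", "a", "x"): "q1", ("q1", "a", "y"): "q0",
    },
    emissions={
        ("q0", "a"): ({"x": 0.8, "y": 0.2}, 0.0),
        ("q1", "a"): ({"x": 0.3, "y": 0.7}, 1.0),
    },
)

state = rdp.initial_state
for _ in range(5):
    state, obs, reward = rdp.step(state, "a")
    print(state, obs, reward)

In the approach the paper describes, tables like those above would not be written by hand but induced from data via automata learning and history clustering, and the resulting machine would then be solved with an adapted MCTS.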

Cite

APA

Abadi, E., & Brafman, R. I. (2020). Learning and solving regular decision processes. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 1948–1954). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/270
