Omega-regular objectives in model-free reinforcement learning

80 citations · 31 Mendeley readers

This article is free to access.

Abstract

We provide the first solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem, and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. We compile ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.
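The construction summarized in the abstract can be illustrated with a small, self-contained sketch: form the product of the MDP with a limit-deterministic Büchi automaton, redirect accepting transitions to a fresh reward sink with some probability 1 - ζ, and run an off-the-shelf model-free learner such as tabular Q-learning on the result. The Python below is a hypothetical toy, not the authors' code or benchmarks; the two-state MDP, the hand-built automaton, the value of ZETA, and all hyperparameters are illustrative assumptions.

import random
from collections import defaultdict

# Toy MDP (hypothetical): states 0 and 1, actions 'a' and 'b'.
ACTIONS = ['a', 'b']

def mdp_step(s, act):
    # Action 'a' from state 0 reaches state 1 with probability 0.9; everything else self-loops.
    if s == 0 and act == 'a':
        return 1 if random.random() < 0.9 else 0
    return s

# Hand-built toy automaton standing in for an LDBA (hypothetical): automaton state 1 is the
# accepting component; staying there while the MDP is in state 1 takes an accepting transition.
def aut_step(q, mdp_state):
    if mdp_state == 1:
        return 1, (q == 1)      # move to / stay in the accepting component
    return 0, False             # otherwise fall back to the initial component

ZETA = 0.99     # continuation probability of the reachability reduction (assumed value)
SINK = 'goal'   # fresh absorbing target; reaching it gives reward 1

def product_step(state, act):
    # One step of the augmented product MDP: on an accepting transition,
    # jump to the reward sink with probability 1 - ZETA, otherwise continue.
    if state == SINK:
        return SINK, 0.0
    s, q = state
    s2 = mdp_step(s, act)
    q2, accepting = aut_step(q, s2)
    if accepting and random.random() > ZETA:
        return SINK, 1.0
    return (s2, q2), 0.0

# Off-the-shelf model-free learner: tabular Q-learning on the product.
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.1, 0.999, 0.1

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(5000):
    state = (0, 0)
    for _ in range(100):
        act = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward = product_step(state, act)
        best_next = 0.0 if nxt == SINK else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, act)] += ALPHA * (reward + GAMMA * best_next - Q[(state, act)])
        state = nxt
        if state == SINK:
            break

print("greedy action in product state (0, 0):", greedy((0, 0)))

In this sketch the discount factor close to 1 is a simplification; the reduction itself is about maximizing the (undiscounted) probability of reaching the sink, which tracks the probability of satisfying the ω-regular objective.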

Citation (APA)

Hahn, E. M., Perez, M., Schewe, S., Somenzi, F., Trivedi, A., & Wojtczak, D. (2019). Omega-regular objectives in model-free reinforcement learning. In Lecture Notes in Computer Science (Vol. 11427, pp. 395–412). Springer. https://doi.org/10.1007/978-3-030-17462-0_27
