Approximating Euclidean by Imprecise Markov Decision Processes

Abstract

Euclidean Markov decision processes are a powerful tool for modeling control problems under uncertainty over continuous domains. Finite-state imprecise Markov decision processes can be used to approximate the behavior of these infinite models. In this paper we address two questions. First, we investigate what kind of approximation guarantees are obtained when the Euclidean process is approximated by finite-state approximations induced by increasingly fine partitions of the continuous state space. We show that, for cost functions over finite time horizons, the approximations become arbitrarily precise. Second, we use imprecise Markov decision process approximations as a tool to analyze and validate cost functions and strategies obtained by reinforcement learning. We find that, on the one hand, our new theoretical results validate basic design choices of a previously proposed reinforcement learning approach. On the other hand, the imprecise Markov decision process approximations reveal some inaccuracies in the learned cost functions.
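To make the construction concrete, below is a minimal, hypothetical Python sketch of the partition-based abstraction and of finite-horizon interval value iteration; it is an illustration of the general technique, not the authors' implementation. Everything specific in it is an assumption: the 1-D state space [0, 1], the Gaussian transition kernel, the stage cost, the grid resolution, and the sample-based interval bounds (which are only approximate; a sound abstraction would bound the kernel over each cell analytically).

```python
import numpy as np
from scipy.stats import norm

N_CELLS = 20           # grid resolution; refining it tightens the bounds
ACTIONS = [-0.1, 0.1]  # hypothetical control inputs (drift left / right)
SIGMA = 0.05           # noise level of the assumed Gaussian kernel
HORIZON = 10           # finite time horizon
SAMPLES = 5            # kernel evaluations per cell for the interval bounds

edges = np.linspace(0.0, 1.0, N_CELLS + 1)
centers = (edges[:-1] + edges[1:]) / 2

def cell_distribution(x, a):
    """Probability of landing in each grid cell from state x under action a
    (Gaussian around the clipped successor, renormalised to [0, 1])."""
    mean = np.clip(x + a, 0.0, 1.0)
    p = np.diff(norm.cdf(edges, loc=mean, scale=SIGMA))
    return p / p.sum()

# Interval abstraction: for each (cell, action), bound the transition
# probability to every destination cell over sample points in the cell.
lo = np.zeros((N_CELLS, len(ACTIONS), N_CELLS))
hi = np.zeros_like(lo)
for i in range(N_CELLS):
    xs = np.linspace(edges[i], edges[i + 1], SAMPLES + 2)[1:-1]
    for k, a in enumerate(ACTIONS):
        ps = np.array([cell_distribution(x, a) for x in xs])
        lo[i, k], hi[i, k] = ps.min(axis=0), ps.max(axis=0)

def resolve(lo_row, hi_row, values, worst):
    """Pick the distribution in the interval credal set with maximal
    (worst=True) or minimal expected value: start from the lower bounds
    and greedily push the remaining mass toward the preferred cells."""
    order = np.argsort(values)[::-1] if worst else np.argsort(values)
    p, slack = lo_row.copy(), 1.0 - lo_row.sum()
    for j in order:
        add = min(hi_row[j] - p[j], slack)
        p[j] += add
        slack -= add
    return p @ values

cost = (centers - 0.8) ** 2   # hypothetical stage cost: steer toward 0.8

V_lo = np.zeros(N_CELLS)      # optimistic bound on the optimal cost-to-go
V_hi = np.zeros(N_CELLS)      # pessimistic bound
for _ in range(HORIZON):
    V_lo = cost + np.array([min(resolve(lo[i, k], hi[i, k], V_lo, worst=False)
                                for k in range(len(ACTIONS)))
                            for i in range(N_CELLS)])
    V_hi = cost + np.array([min(resolve(lo[i, k], hi[i, k], V_hi, worst=True)
                                for k in range(len(ACTIONS)))
                            for i in range(N_CELLS)])

print("cost bounds at x = 0.5:", V_lo[N_CELLS // 2], "<=", V_hi[N_CELLS // 2])
```

Increasing N_CELLS shrinks the gap between V_lo and V_hi, mirroring the paper's result that finite-horizon cost functions are approximated arbitrarily precisely under increasingly fine partitions; a learned cost function falling outside [V_lo, V_hi] would be flagged as inaccurate.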

Citation (APA)

Jaeger, M., Bacci, G., Bacci, G., Larsen, K. G., & Jensen, P. G. (2020). Approximating Euclidean by Imprecise Markov Decision Processes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12476 LNCS, pp. 275–289). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61362-4_15
