Markov decision processes with multiple long-run average objectives

Abstract

We consider Markov decision processes (MDPs) with multiple long-run average objectives. Such MDPs occur in design problems where one wishes to optimize several criteria simultaneously, for example, latency and power. The possible trade-offs between the different objectives are characterized by the Pareto curve. We show that every Pareto optimal point can be ε-approximated by a memoryless strategy, for all ε > 0. In contrast to the single-objective case, the memoryless strategy may require randomization. We show that the Pareto curve can be approximated (a) in polynomial time in the size of the MDP for irreducible MDPs; and (b) in polynomial space in the size of the MDP for all MDPs. Additionally, we study the problem of deciding whether a given value vector is realizable by some strategy, and show that it can be decided in polynomial time for irreducible MDPs and in NP for all MDPs. These results provide algorithms for design exploration in MDP models with multiple long-run average objectives. © Springer-Verlag Berlin Heidelberg 2007.
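To make the realizability question concrete, the sketch below illustrates the standard state-action-frequency linear program for a small irreducible (unichain) MDP with two long-run average objectives. This is a hypothetical illustration under common textbook assumptions, not the paper's construction: the toy MDP, the reward tables, and the helper realizable() are made up for the example, and scipy.optimize.linprog is assumed to be available.

```python
# Minimal sketch (not the paper's algorithm): decide whether a target value vector is
# achievable in a toy irreducible MDP via the state-action-frequency LP, and if so
# read off a memoryless randomized strategy from the optimal frequencies.
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP; P[s, a, s2] = transition probability.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # from state 0 under actions 0, 1
    [[0.5, 0.5], [0.1, 0.9]],   # from state 1 under actions 0, 1
])
# Two reward functions to be maximized (e.g. negated latency and negated power);
# r[i, s, a] is the reward of objective i for action a in state s.
r = np.array([
    [[4.0, 2.0], [1.0, 3.0]],   # objective 1
    [[1.0, 4.0], [3.0, 2.0]],   # objective 2
])
n_s, n_a = P.shape[0], P.shape[1]

def realizable(target):
    """Return a memoryless randomized strategy whose long-run averages dominate
    `target`, or None if no such strategy exists (for this unichain example)."""
    idx = lambda s, a: s * n_a + a          # flatten x(s, a) into one vector
    # Balance constraints: for every state, outflow equals inflow; plus normalization.
    A_eq = np.zeros((n_s + 1, n_s * n_a))
    b_eq = np.zeros(n_s + 1)
    for s in range(n_s):
        for a in range(n_a):
            A_eq[s, idx(s, a)] += 1.0
            for s2 in range(n_s):
                A_eq[s2, idx(s, a)] -= P[s, a, s2]
    A_eq[n_s, :] = 1.0                       # frequencies sum to 1
    b_eq[n_s] = 1.0
    # Objective constraints r_i . x >= target_i, written as -r_i . x <= -target_i.
    A_ub = np.array([[-r[i, s, a] for s in range(n_s) for a in range(n_a)]
                     for i in range(len(target))])
    b_ub = -np.asarray(target, dtype=float)
    res = linprog(c=np.zeros(n_s * n_a), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    if not res.success:
        return None
    x = res.x.reshape(n_s, n_a)
    # Every state has positive frequency here (all transitions are positive),
    # so the memoryless strategy sigma(a | s) ~ x(s, a) is well defined.
    return x / x.sum(axis=1, keepdims=True)

print(realizable([2.5, 2.0]))   # a row-per-state strategy, or None if infeasible
```

Sweeping the target vector over a grid, or replacing the zero LP objective with a weighted combination of the reward functions, gives a crude way to trace out the achievable trade-offs in this toy setting; the paper's results concern the complexity of doing this exactly and for general MDPs.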

Citation (APA)

Chatterjee, K. (2007). Markov decision processes with multiple long-run average objectives. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4855 LNCS, pp. 473–484). Springer Verlag. https://doi.org/10.1007/978-3-540-77050-3_39
