On the complexity of finite memory policies for Markov decision processes


Abstract

We consider some complexity questions concerning a model of uncertainty known as Markov decision processes. Our results concern the problem of constructing optimal policies under a criterion of optimality defined in terms of constraints on the behavior of the process. The constraints are described by regular languages, and the motivation comes from robot motion planning. It is known that, in the case of perfect information, optimal policies under the traditional cost criteria can be found among Markov policies, and in polynomial time. We show, first, that under the behavior criterion optimal policies are not Markovian, for both finite and infinite horizons. On the other hand, optimal policies in this case lie in the class of finite memory policies defined in the paper, and can be found in polynomial time. We remark that in the case of partial information, finite memory policies cannot be optimal in general. Nevertheless, the class of finite memory policies appears to be of interest for probabilistic policies: although probabilistic policies are no better than deterministic ones in the general class of history-remembering policies, they can be better within the class of finite memory policies.
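To make the notion of a finite memory policy concrete, the sketch below illustrates the general idea behind such policies: the policy conditions its action on the current MDP state together with the state of a finite automaton (DFA) that tracks the regular behavioral constraint. This is only an illustrative toy, not the paper's construction; the specific MDP, DFA, and policy table are invented for the example.

```python
import random

# A minimal sketch of a finite memory policy for an MDP with a
# regular-language constraint on behavior. All concrete numbers here
# (transition probabilities, DFA, policy table) are illustrative
# assumptions, not taken from the paper.

# Toy MDP: states 0..2, actions 'a'/'b'.
# P[state][action] = list of (next_state, probability).
P = {
    0: {'a': [(1, 0.8), (0, 0.2)], 'b': [(2, 1.0)]},
    1: {'a': [(2, 0.5), (1, 0.5)], 'b': [(0, 1.0)]},
    2: {'a': [(0, 1.0)],           'b': [(1, 1.0)]},
}

# DFA standing in for the regular constraint; here it merely tracks the
# parity of visits to state 2. DFA_DELTA[memory][mdp_state] = next memory.
DFA_DELTA = {0: {0: 0, 1: 0, 2: 1}, 1: {0: 1, 1: 1, 2: 0}}
DFA_START = 0

# A finite memory policy maps (MDP state, DFA memory state) to an action.
# It is not Markovian in the MDP state alone, yet its memory is finite
# because the DFA has finitely many states.
POLICY = {
    (0, 0): 'a', (0, 1): 'b',
    (1, 0): 'b', (1, 1): 'a',
    (2, 0): 'a', (2, 1): 'a',
}

def run_episode(horizon=10, seed=0):
    """Simulate the MDP under the finite memory policy."""
    rng = random.Random(seed)
    s, m = 0, DFA_START            # MDP state, DFA memory state
    trajectory = []
    for _ in range(horizon):
        a = POLICY[(s, m)]
        trajectory.append((s, m, a))
        # Sample the next MDP state from P[s][a].
        r, acc = rng.random(), 0.0
        for s_next, p in P[s][a]:
            acc += p
            if r <= acc:
                break
        s = s_next
        m = DFA_DELTA[m][s]        # update the finite memory
    return trajectory

if __name__ == '__main__':
    for step in run_episode():
        print(step)
```

Running the script prints a trajectory of (MDP state, memory state, action) triples; the key design point is that the policy's dependence on history is funneled entirely through the finitely many DFA states.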

Citation (APA)

Beauquier, D., Burago, D., & Slissenko, A. (1995). On the complexity of finite memory policies for Markov decision processes. In Lecture Notes in Computer Science (Vol. 969, pp. 191–200). Springer Verlag. https://doi.org/10.1007/3-540-60246-1_125
