On polynomial sized MDP succinct policies

Abstract

Policies of Markov Decision Processes (MDPs) determine the next action to execute based on the current state and, possibly, the history (the sequence of past states). When the number of states is large, succinct representations are often used to encode both the MDP and its policies compactly. In this paper, some problems related to the size of succinctly represented policies are analyzed. In particular, it is shown that some MDPs have policies that can only be represented in space super-polynomial in the size of the MDP, unless the polynomial hierarchy collapses. This fact motivates the study of the problem of deciding whether a given MDP has a policy of a given size and reward. Since some algorithms for MDPs work by finding a succinct representation of the value function, the problem of deciding the existence of a succinct representation of a value function of a given size and reward is also considered. © 2004 AI Access Foundation. All rights reserved.
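
As a concrete illustration of the gap the abstract describes, the following minimal Python sketch (not from the paper; the state encoding, action names, and function names are all illustrative) contrasts an explicit policy, stored as one table entry per state, with a succinct policy given by a small rule:

    # Sketch only: states are n-bit vectors, so the state space has 2**n states.
    from itertools import product

    n = 4  # number of Boolean state variables (illustrative)

    # Explicit representation: one table entry per state -- size grows as 2**n.
    explicit_policy = {
        state: ("act_a" if state[0] == 1 else "act_b")
        for state in product((0, 1), repeat=n)
    }

    # Succinct representation: a rule evaluated on the current state --
    # its size is constant in n, yet it defines the same mapping.
    def succinct_policy(state):
        return "act_a" if state[0] == 1 else "act_b"

    assert all(explicit_policy[s] == succinct_policy(s)
               for s in product((0, 1), repeat=n))

Here the explicit table has 2**n entries while the rule occupies constant space regardless of n; the paper's results concern MDPs for which no comparably small representation of a good policy exists, unless the polynomial hierarchy collapses.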

Citation (APA)

Liberatore, P. (2004). On polynomial sized MDP succinct policies. Journal of Artificial Intelligence Research. https://doi.org/10.1613/jair.1134
