BI-POMDP" bounded, incremental partially-observable Markov-model planning

Citations: 20
Readers (Mendeley): 23

Abstract

For the problem of planning actions when their outcomes are uncertain, Markov models effectively capture this uncertainty and yield optimal actions. When information about the world state is itself uncertain, partially observable Markov models are the appropriate extension of the basic Markov model. However, finding optimal actions for partially observable Markov models is computationally difficult, in practice bordering on intractability. Approximate or heuristic approaches, on the other hand, lose any guarantee of optimality, and even any indication of how far from optimal they might be. In this paper, we present an incremental, search-based approximation for partially observable Markov models. The approximation is based on an incremental AND-OR search that uses heuristic functions derived from the underlying Markov model, which is more easily solved. In addition, the search provides a bound on the possible error of the approximation. We illustrate the method with results on problems taken from the related literature.
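The abstract describes the approach only at a high level. The sketch below is a minimal illustration of the core idea, not the paper's BI-POMDP implementation: expand the belief-state AND-OR tree to a fixed depth, bound frontier nodes from above by the value function of the underlying (fully observable) MDP and from below by the value of the best fixed-action ("blind") policy, and propagate the resulting interval upward, so the gap between the bounds is an explicit error bound that shrinks as the search deepens. The two-state problem, all names, and the choice of lower bound are assumptions made for this sketch; the MDP-based upper bound follows the abstract's description of heuristics drawn from the underlying Markov model.

```python
GAMMA = 0.9                       # discount factor (hypothetical)
S = [0, 1]                        # states
A = ["stay", "switch"]            # actions
Z = [0, 1]                        # observations

def T(s, a, s2):                  # P(s' | s, a): "switch" flips the state
    return 1.0 if (s2 == s) == (a == "stay") else 0.0

def O(a, s2, z):                  # P(z | a, s'): a noisy state sensor
    return 0.85 if z == s2 else 0.15

def R(s, a):                      # reward: staying in state 0 pays off
    return 1.0 if (s == 0 and a == "stay") else 0.0

def mdp_values(iters=300):
    """Value iteration on the underlying (fully observable) MDP."""
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {s: max(R(s, a) + GAMMA * sum(T(s, a, s2) * V[s2] for s2 in S)
                    for a in A) for s in S}
    return V

def blind_values(a, iters=300):
    """Value of repeating action a forever: a valid POMDP lower bound."""
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {s: R(s, a) + GAMMA * sum(T(s, a, s2) * V[s2] for s2 in S)
             for s in S}
    return V

V_MDP = mdp_values()
V_BLIND = {a: blind_values(a) for a in A}

def upper_bound(b):               # E_b[V_MDP] >= V*(b): MDP-based heuristic
    return sum(b[s] * V_MDP[s] for s in S)

def lower_bound(b):               # best blind policy <= V*(b)
    return max(sum(b[s] * V_BLIND[a][s] for s in S) for a in A)

def pr_obs(b, a, z):              # P(z | b, a)
    return sum(b[s] * T(s, a, s2) * O(a, s2, z) for s in S for s2 in S)

def belief_update(b, a, z):       # Bayes filter: b'(s') ~ O(z|a,s') * sum_s T * b(s)
    new = {s2: O(a, s2, z) * sum(T(s, a, s2) * b[s] for s in S) for s2 in S}
    norm = sum(new.values())
    return {s2: p / norm for s2, p in new.items()}

def bounded_value(b, depth):
    """AND-OR expansion of the belief tree to `depth`, returning an
    interval [lo, hi] guaranteed to contain the optimal value V*(b)."""
    if depth == 0:
        return lower_bound(b), upper_bound(b)
    best_lo = best_hi = float("-inf")
    for a in A:                   # OR node: the agent picks one action
        lo = hi = sum(b[s] * R(s, a) for s in S)
        for z in Z:               # AND node: every observation can occur
            p = pr_obs(b, a, z)
            if p > 0:
                clo, chi = bounded_value(belief_update(b, a, z), depth - 1)
                lo += GAMMA * p * clo
                hi += GAMMA * p * chi
        best_lo, best_hi = max(best_lo, lo), max(best_hi, hi)
    return best_lo, best_hi

if __name__ == "__main__":
    b0 = {0: 0.5, 1: 0.5}         # uniform initial belief
    for d in range(5):            # deeper search tightens the error bound
        lo, hi = bounded_value(b0, d)
        print(f"depth {d}: V*(b0) in [{lo:.3f}, {hi:.3f}], gap {hi - lo:.3f}")
```

Each level of expansion multiplies the frontier gaps by the discount factor and averages them over observations, so the interval around V*(b0) tightens geometrically with depth; the printed gap plays the role of the "bound on the possible error" that the abstract refers to, though the paper's own search is incremental rather than fixed-depth.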

Citation (APA)

Washington, R. (1997). BI-POMDP: Bounded, incremental partially-observable Markov-model planning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1348 LNAI, pp. 440–451). Springer-Verlag. https://doi.org/10.1007/3-540-63912-8_105
