Monte Carlo value iteration for continuous-state POMDPs

Abstract

Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot's state space and the corresponding belief space, and avoids inefficient a priori discretization of the state space as a grid. Both theoretical results and preliminary experimental results indicate that MCVI is a promising new approach for robot motion planning under uncertainty. © 2010 Springer-Verlag Berlin Heidelberg.
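To make the sampling idea concrete, here is a loose sketch of a Monte Carlo backup over a particle-based belief. This is not the authors' MCVI algorithm (which builds a policy graph rather than calling an explicit next-step value function); all names and the toy dynamics are illustrative assumptions. The belief is approximated by sampled states, and each action's value is estimated by Monte Carlo simulation instead of summing over a discretized state grid.

```python
import random

def mc_backup(belief_particles, actions, step, reward, value_next,
              n_samples=100, gamma=0.95):
    """Estimate Q(b, a) for each action by Monte Carlo sampling and
    return the best action with its value.

    belief_particles: list of sampled states approximating the belief b.
    step(s, a):       samples a successor state (assumed simulator).
    reward(s, a):     immediate reward (assumed model).
    value_next(s):    estimated value at the next step (assumed given).
    """
    best_a, best_q = None, float("-inf")
    for a in actions:
        total = 0.0
        for _ in range(n_samples):
            s = random.choice(belief_particles)   # sample a state from b
            s2 = step(s, a)                       # simulate one transition
            total += reward(s, a) + gamma * value_next(s2)
        q = total / n_samples                     # MC estimate of Q(b, a)
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q

# Toy 1-D example: drive the state toward 0.
particles = [1.0] * 50                       # belief concentrated at s = 1
a, q = mc_backup(particles,
                 actions=[-1.0, 0.0, 1.0],
                 step=lambda s, a: s + a,
                 reward=lambda s, a: -abs(s),
                 value_next=lambda s: -abs(s))
```

Because the belief and dynamics in the toy example are deterministic, the backup reliably picks the action `-1.0`, which moves the state to the goal at 0. In the paper's setting, `value_next` would itself be represented implicitly by a sampled policy graph, so no state-space grid is ever constructed.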

Citation (APA)

Bai, H., Hsu, D., Lee, W. S., & Ngo, V. A. (2010). Monte Carlo value iteration for continuous-state POMDPs. In Springer Tracts in Advanced Robotics (Vol. 68, pp. 175–191). https://doi.org/10.1007/978-3-642-17452-0_11
