Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot's state space and the corresponding belief space, and avoids inefficient a priori discretization of the state space as a grid. Both theoretical results and preliminary experimental results indicate that MCVI is a promising new approach for robot motion planning under uncertainty. © 2010 Springer-Verlag Berlin Heidelberg.
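The core idea the abstract describes, replacing an a priori state grid with sampled states and a Monte Carlo estimate of each action's value, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy 1D motion model, reward, and `mc_backup` helper are assumptions for illustration only, and a belief is represented as a particle set (a list of sampled states).

```python
import random

# Hypothetical toy problem: a robot on a 1D continuous line.
# A belief is a set of sampled states (particles), not a grid.

def transition(s, a):
    # Assumed noisy motion model for illustration.
    return s + a + random.gauss(0.0, 0.1)

def reward(s, a):
    # Assumed reward: best when the robot is at the origin.
    return -abs(s)

def mc_backup(belief, actions, value_fn, gamma=0.95, n_samples=100):
    """Monte Carlo backup sketch: estimate each action's value by
    sampling states from the belief and simulating one step, then
    return the best action. `value_fn` estimates the value of a
    sampled next state."""
    best_a, best_q = None, float("-inf")
    for a in actions:
        total = 0.0
        for _ in range(n_samples):
            s = random.choice(belief)        # sample a state from the belief
            s_next = transition(s, a)        # simulate one noisy step
            total += reward(s, a) + gamma * value_fn(s_next)
        q = total / n_samples
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q

# Usage: a belief concentrated around s = 2.0; moving toward the
# origin should score best under the assumed reward.
belief = [random.gauss(2.0, 0.5) for _ in range(200)]
action, q_value = mc_backup(belief, [-0.5, 0.0, 0.5], lambda s: -abs(s))
```

Because every quantity is estimated from samples, no discretization of the continuous state space is ever constructed; this is the property the abstract credits for MCVI's efficiency.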
CITATION STYLE
Bai, H., Hsu, D., Lee, W. S., & Ngo, V. A. (2010). Monte Carlo value iteration for continuous-state POMDPs. In Springer Tracts in Advanced Robotics (Vol. 68, pp. 175–191). https://doi.org/10.1007/978-3-642-17452-0_11