Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for motion planning of autonomous robots in uncertain and dynamic environments. They have been successfully applied to various robotic tasks, but a major challenge is to scale up POMDP algorithms for more complex robotic systems. Robotic systems often have mixed observability: even when a robot's state is not fully observable, some components of the state may still be fully observable. Exploiting this, we use a factored model to represent separately the fully and partially observable components of a robot's state and derive a compact lower-dimensional representation of its belief space. We then use this factored representation in conjunction with a point-based algorithm to compute approximate POMDP solutions. Separating fully and partially observable state components using a factored model opens up several opportunities to improve the efficiency of point-based POMDP algorithms. Experiments show that on standard test problems, our new algorithm is many times faster than a leading point-based POMDP algorithm.
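The factored representation described above can be illustrated with a small sketch: the belief is maintained only over the partially observable state component y, while the fully observable component x is tracked directly from observations. The function and variable names below (trans_x, trans_y, obs) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a factored (mixed-observability) belief update, assuming
# the state factors into an observed component x and a hidden component y.
import numpy as np

def factored_belief_update(b_y, x, x_next, a, o, trans_x, trans_y, obs):
    """Update the belief over the partially observable component y only.

    b_y       : 1-D array, current belief over y (sums to 1)
    x, x_next : observed values of the fully observable component before/after
    a, o      : action taken and observation received
    trans_x   : trans_x[a][x, y, x']     = P(x' | x, y, a)
    trans_y   : trans_y[a][x, y, x', y'] = P(y' | x, y, a, x')
    obs       : obs[a][x', y', o]        = P(o | x', y', a)
    """
    n_y = b_y.shape[0]
    b_next = np.zeros(n_y)
    for y_next in range(n_y):
        # Sum only over the hidden previous y; x and x' are known, so the
        # belief lives in the low-dimensional space over y alone.
        total = 0.0
        for y in range(n_y):
            total += (b_y[y]
                      * trans_x[a][x, y, x_next]
                      * trans_y[a][x, y, x_next, y_next])
        b_next[y_next] = obs[a][x_next, y_next, o] * total
    z = b_next.sum()
    return b_next / z if z > 0 else b_next
```

Because the belief is a distribution over y rather than over the full state (x, y), a point-based solver sampling belief points operates in a much smaller space, which is the source of the speedups reported in the abstract.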
CITATION STYLE
Ong, S. C. W., Png, S. W., Hsu, D., & Lee, W. S. (2010). POMDPs for robotic tasks with mixed observability. In Robotics: Science and Systems (Vol. 5, pp. 201–208). MIT Press Journals. https://doi.org/10.7551/mitpress/8727.003.0027