When parts of the states in a goal POMDP are fully observable and some actions are deterministic, it is possible to take advantage of these properties to efficiently generate approximate solutions. Actions that deterministically affect the fully observable component of the world state can be abstracted away and combined into macro actions, permitting a planner to converge more quickly. This preprocessing can be separated from the main search procedure, allowing us to leverage existing POMDP solvers. Theoretical results show how a POMDP can be analyzed to identify the exploitable properties, and formal guarantees are provided showing that the use of macro actions preserves solvability. The efficiency of the method is demonstrated with examples when used in combination with existing POMDP solvers. Copyright © 2013, Association for the Advancement of Artificial Intelligence. All rights reserved.
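The core idea of the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the state names, action names, and transition table below are hypothetical): when some actions act deterministically on the fully observable component of the state, a chain of such actions can be collapsed into a single macro action by searching over that observable component alone, without reasoning about beliefs.

```python
# Illustrative sketch, assuming a hypothetical transition table over the
# fully observable component: observable state -> {action: next state}.
from collections import deque

det_transitions = {
    "A": {"go_B": "B"},
    "B": {"go_C": "C"},
    "C": {},
}

def macro_action(start, target):
    """Breadth-first search over the observable component only.

    Returns the deterministic action sequence that the macro action
    abstracts away, or None if the target is unreachable.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if state == target:
            return plan
        for action, nxt in det_transitions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [action]))
    return None

# Two deterministic steps collapse into one macro action for the planner.
print(macro_action("A", "C"))  # → ['go_B', 'go_C']
```

Because this search never touches the belief state, it can run as a separate preprocessing step, and the resulting macro actions are handed to an off-the-shelf POMDP solver as single actions.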
CITATION STYLE
Warnquist, H., Kvarnström, J., & Doherty, P. (2013). Exploiting fully observable and deterministic structures in goal POMDPs. In ICAPS 2013 - Proceedings of the 23rd International Conference on Automated Planning and Scheduling (pp. 242–250). https://doi.org/10.1609/icaps.v23i1.13554