Abstract
We introduce the Bayes-Adaptive Interactive Partially Observable Markov Decision Process (BA-IPOMDP), the first multiagent decision model that explicitly incorporates model learning. As in I-POMDPs, a BA-IPOMDP agent maintains beliefs over interactive states, which comprise the physical states as well as the other agents' models. The BA-IPOMDP treats the state transition and observation probabilities as unknown, and augments the interactive states with these parameters; beliefs are then maintained over the augmented interactive state space. This necessary state expansion exacerbates the curse of dimensionality, especially since each I-POMDP belief update is already recursive: to anticipate the other agents' actions, an agent invokes belief updates from their perspectives as part of its own. We extend the interactive particle filter to perform approximate belief updates for BA-IPOMDPs, and present our findings on the multiagent Tiger problem.
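To make the augmented-state idea concrete, the following is a minimal sketch in Python of a Bayes-adaptive particle filter on the single-agent tiger problem. It is an illustration under simplifying assumptions, not the paper's interactive particle filter: the nested belief updates over other agents' models are omitted, and a single unknown observation accuracy p (a hypothetical stand-in for the unknown observation probabilities) is folded into each particle alongside the physical state, mirroring how the BA-IPOMDP augments interactive states with unknown parameters.

```python
import random

# Illustrative sketch only (not the authors' algorithm): each particle is an
# augmented state (tiger location, listening accuracy p), so filtering jointly
# tracks the physical state and the unknown observation parameter.

STATES = ("tiger-left", "tiger-right")

def sample_particles(n):
    """Initial belief: uniform over tiger location, uniform prior on p."""
    return [(random.choice(STATES), random.uniform(0.5, 1.0)) for _ in range(n)]

def obs_likelihood(obs, state, p):
    """P(obs | state, accuracy p) after a 'listen' action."""
    correct = (obs == "hear-left") == (state == "tiger-left")
    return p if correct else 1.0 - p

def update(particles, action, obs):
    """One Bayes-adaptive belief update: propagate, weight, resample."""
    propagated = []
    for state, p in particles:
        if action == "listen":
            next_state = state                   # listening leaves the tiger in place
        else:
            next_state = random.choice(STATES)   # opening a door resets the problem
        propagated.append((next_state, p))
    weights = [obs_likelihood(obs, s, p) for s, p in propagated]
    total = sum(weights)
    if total == 0.0:
        return sample_particles(len(particles))  # degenerate belief: restart
    weights = [w / total for w in weights]
    return random.choices(propagated, weights=weights, k=len(particles))

if __name__ == "__main__":
    belief = sample_particles(2000)
    for _ in range(5):                            # five consistent observations
        belief = update(belief, "listen", "hear-left")
    left = sum(s == "tiger-left" for s, _ in belief) / len(belief)
    mean_p = sum(p for _, p in belief) / len(belief)
    print(f"P(tiger-left) ~ {left:.2f}, posterior mean accuracy ~ {mean_p:.2f}")
```

Running this with five consistent "hear-left" observations drives the posterior toward tiger-left while simultaneously concentrating the belief over p on higher accuracies, the joint state-and-parameter learning the abstract describes. The paper's interactive particle filter additionally nests such updates within each sampled model of the other agents.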
Citation
Ng, B., Boakye, K., Meyers, C., & Wang, A. (2012). Bayes-Adaptive Interactive POMDPs. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012 (pp. 1408–1414). AAAI Press. https://doi.org/10.1609/aaai.v26i1.8264