Heuristic planning for decentralized MDPs with sparse interactions

Abstract

In this work, we explore how local interactions can simplify the process of decision-making in multiagent systems, particularly in multirobot problems. We review a recent decision-theoretic model for multiagent systems, the decentralized sparse-interaction Markov decision process (Dec-SIMDP), that explicitly distinguishes the situations in which the agents in the team must coordinate from those in which they can act independently. We situate this class of problems within different multiagent models, such as MMDPs and transition-independent Dec-MDPs. We then contribute a new general approach that leverages the particular structure of Dec-SIMDPs to plan efficiently in this class of problems, and propose two algorithms based on this underlying approach. We pinpoint the main properties of our approach through illustrative examples in multirobot navigation domains with partial observability, and provide empirical comparisons between our algorithms and other existing algorithms for this class of problems. We show that our approach allows the robots to look ahead for possible interactions, planning to accommodate such interactions and thus overcoming some of the limitations of previous methods. © 2013 Springer-Verlag.
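The core idea of the Dec-SIMDP model — that agents can act on independent local policies outside a small set of interaction states, and only coordinate inside them — can be sketched as follows. This is an illustrative sketch only, not the authors' algorithm: the interaction area, the greedy policies, and the yield-on-collision rule are all hypothetical stand-ins for the planned policies the paper computes.

```python
# Sketch of sparse-interaction decision-making (hypothetical example):
# two agents navigate a grid, each with an independent policy, and
# switch to a joint coordination rule only inside an interaction area.

INTERACTION_AREA = {(2, 0), (2, 1)}  # hypothetical doorway cells


def independent_policy(pos, goal):
    """Greedy single-agent move toward the goal (stand-in for a local MDP policy)."""
    x, y = pos
    gx, gy = goal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return pos


def joint_policy(positions, goals):
    """Inside the interaction area, agent 0 moves first and agent 1 yields
    if its move would collide (a hypothetical coordination rule)."""
    move0 = independent_policy(positions[0], goals[0])
    move1 = independent_policy(positions[1], goals[1])
    if move1 == move0:  # would collide: agent 1 waits in place
        move1 = positions[1]
    return [move0, move1]


def step(positions, goals):
    """Use the joint policy only when some agent is in the interaction area;
    otherwise each agent acts on its independent policy."""
    if any(p in INTERACTION_AREA for p in positions):
        return joint_policy(positions, goals)
    return [independent_policy(p, g) for p, g in zip(positions, goals)]
```

The point of the sketch is the branch in `step`: coordination cost is paid only in the (sparse) interaction states, while everywhere else planning decomposes into independent single-agent problems.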

Citation (APA):
Melo, F. S., & Veloso, M. (2012). Heuristic planning for decentralized MDPs with sparse interactions. In Springer Tracts in Advanced Robotics (Vol. 83 STAR, pp. 329–343). https://doi.org/10.1007/978-3-642-32723-0_24