Decentralized Markov Decision Processes for Handling Temporal and Resource constraints in a Multiple Robot System

  • Beynier, A.
  • Mouaddib, A.-I.
Abstract

We consider in this paper a multi-robot planning system in which robots carry out a common mission with the following characteristics: the mission is an acyclic graph of tasks with dependencies and temporal windows of validity. Tasks are distributed among the robots, whose task durations and resource consumptions are uncertain. This class of problems can be solved using decision-theoretic planning techniques that handle local temporal constraints and inter-robot dependencies, allowing the robots to synchronize their processing. A specific decision model and value function allow the robots to coordinate their actions at runtime so as to maximize the overall value of the mission realization. To that end, we design a cooperative multi-robot planning system using distributed Markov Decision Processes (MDPs) that requires no communication. The robots take uncertainty on temporal intervals and dependencies into consideration and use a distributed value function to coordinate their actions.
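To make the idea concrete, here is a minimal sketch (not the authors' exact model; the duration distribution, window bounds, and reward value are illustrative assumptions) of how a robot could evaluate candidate start times for a task with an uncertain duration and a temporal window of validity, choosing the start time that maximizes expected value:

```python
# Hedged sketch: one robot, one task. The task's duration is uncertain
# (a small discrete distribution), and reward is earned only if the task
# finishes within its temporal window [EST, LET]. All numbers below are
# hypothetical, for illustration only.

DURATIONS = {2: 0.5, 3: 0.3, 4: 0.2}  # duration -> probability
EST, LET = 1, 6                        # earliest start time, latest end time
REWARD = 10.0                          # value of finishing inside the window

def expected_value(start):
    """Expected reward of starting at `start`: zero if we start before the
    window opens, otherwise the reward weighted by the probability that the
    task finishes by the deadline."""
    if start < EST:
        return 0.0
    return sum(p * REWARD for d, p in DURATIONS.items() if start + d <= LET)

def best_start(candidates):
    """Greedy policy: pick the candidate start time of maximal expected value."""
    return max(candidates, key=expected_value)

if __name__ == "__main__":
    for t in range(6):
        print(t, expected_value(t))
    print("best:", best_start(range(6)))
```

In the paper's setting, such expected values would additionally account for resource consumption and for dependencies on other robots' tasks, which is what the distributed value function coordinates.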

Citation (APA)

Beynier, A., & Mouaddib, A.-I. (2008). Decentralized Markov Decision Processes for Handling Temporal and Resource constraints in a Multiple Robot System. In Distributed Autonomous Robotic Systems 6 (pp. 191–200). Springer Japan. https://doi.org/10.1007/978-4-431-35873-2_19
