Reinforcement learning for zone based multiagent pathfinding under uncertainty

Abstract

We address the problem of multiple agents finding their paths from respective sources to destination nodes in a graph (also called MAPF). Most existing approaches assume that all agents move at a fixed speed and that a single node accommodates only a single agent. Motivated by emerging applications of autonomous vehicles such as drone traffic management, we present zone-based pathfinding (ZBPF), in which agents move among zones and movements between zones take uncertain travel time. Furthermore, each zone can accommodate multiple agents, up to its capacity. We also develop a simulator for ZBPF which provides a clean interface between the simulation environment and learning algorithms. We develop a novel formulation of the ZBPF problem using difference-of-convex functions (DC) programming; the resulting approach can be used for policy learning using samples from the simulator. We also present a multiagent credit assignment scheme that helps our learning approach converge faster. Empirical results on a number of 2D and 3D instances show that our approach effectively minimizes congestion in zones while ensuring that agents reach their final destinations.
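
The abstract does not spell out the simulator's interface or reward design, so the following is only a minimal illustrative sketch of what a zone-based environment with zone capacities, stochastic travel times, and a congestion penalty might look like. The class ZBPFEnv and names such as travel_time, capacity, and step are hypothetical, not the authors' actual API.

    import random
    from dataclasses import dataclass

    @dataclass
    class Agent:
        source: int
        goal: int
        zone: int = -1     # current zone
        arrival: int = 0   # time step at which the current move completes

    class ZBPFEnv:
        """Toy zone-based pathfinding environment (illustrative only).

        zones       -- adjacency list: zone id -> list of neighbouring zone ids
        capacity    -- zone id -> maximum number of agents the zone can hold
        travel_time -- callable (src, dst) -> sampled integer travel time
        agents      -- list of Agent objects with source and goal zones
        """
        def __init__(self, zones, capacity, travel_time, agents):
            self.zones, self.capacity = zones, capacity
            self.travel_time, self.agents = travel_time, agents
            self.t = 0
            for a in self.agents:
                a.zone, a.arrival = a.source, 0

        def step(self, actions):
            """actions[i]: neighbouring zone agent i moves to (its current zone = wait).
            Returns (observations, rewards, done)."""
            self.t += 1
            for agent, nxt in zip(self.agents, actions):
                if self.t < agent.arrival or agent.zone == agent.goal:
                    continue                        # still in transit, or already done
                if nxt in self.zones[agent.zone]:   # legal move: sample a travel time
                    agent.arrival = self.t + self.travel_time(agent.zone, nxt)
                    agent.zone = nxt
            occupancy = {}
            for a in self.agents:
                occupancy[a.zone] = occupancy.get(a.zone, 0) + 1
            rewards = []
            for a in self.agents:
                r = 1.0 if a.zone == a.goal else -0.01                  # small step cost
                r -= max(0, occupancy[a.zone] - self.capacity[a.zone])  # congestion penalty
                rewards.append(r)
            obs = [(a.zone, a.goal, occupancy[a.zone]) for a in self.agents]
            done = all(a.zone == a.goal for a in self.agents)
            return obs, rewards, done

    # Example: a 4-zone line graph, unit capacities, travel time of 1 or 2 steps.
    zones = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    env = ZBPFEnv(zones, {z: 1 for z in zones},
                  lambda s, d: random.randint(1, 2),
                  [Agent(source=0, goal=3), Agent(source=3, goal=0)])
    obs, rewards, done = env.step([1, 2])   # both agents start moving toward their goals

A learning algorithm would repeatedly call step with one target zone per agent and use the returned rewards, which penalize zones whose occupancy exceeds capacity, to train a policy.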

Cite

APA

Ling, J., Gupta, T., & Kumar, A. (2020). Reinforcement learning for zone based multiagent pathfinding under uncertainty. In Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS) (Vol. 30, pp. 551–559). AAAI Press. https://doi.org/10.1609/icaps.v30i1.6751
