Low-Cost Multi-Agent Navigation via Reinforcement Learning with Multi-Fidelity Simulator

Abstract

In recent years, reinforcement learning (RL) has been widely used to solve multi-agent navigation tasks, where high simulator fidelity is critical to narrowing the gap between simulation and the real world. However, high-fidelity simulators incur high sampling costs and bottleneck the training of model-free RL algorithms. We therefore propose a Multi-Fidelity Simulator framework for Multi-Agent Reinforcement Learning (MFS-MARL), which reduces the total data cost by drawing samples from a low-fidelity simulator. We apply depth-first search in the low-fidelity simulator to obtain locally feasible policies, which serve as expert policies that help the underlying reinforcement learning algorithm explore. We build a multi-vehicle simulator with variable fidelity levels to test the proposed method and compare it with vanilla Soft Actor-Critic (SAC) and expert-actor methods. The results show that our method effectively obtains locally feasible policies and achieves a 23% cost reduction on multi-agent navigation tasks.
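The abstract does not include code, but the expert-policy idea can be illustrated with a minimal sketch: assuming the low-fidelity simulator can be abstracted as a coarse occupancy grid, a depth-first search returns a locally feasible path whose first step serves as the expert action used to guide exploration. All names here (dfs_feasible_path, expert_action, the grid encoding) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dfs_feasible_path(grid, start, goal, max_depth=50):
    """Depth-first search on a coarse (low-fidelity) occupancy grid.

    Returns a list of grid cells from start to goal, or None if no
    feasible path exists within max_depth steps. grid[r, c] == 1 marks
    an obstacle; 0 marks free space.
    """
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    stack = [(start, [start])]
    visited = {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        if len(path) >= max_depth:
            continue
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                    and grid[nr, nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

def expert_action(grid, agent_cell, goal_cell):
    """First step of the DFS path, used as a local expert action to
    guide the RL agent's exploration; None if the search fails."""
    path = dfs_feasible_path(grid, agent_cell, goal_cell)
    if path is None or len(path) < 2:
        return None
    return (path[1][0] - path[0][0], path[1][1] - path[0][1])

# Example: 5x5 grid with a partial wall, agent at (0, 0), goal at (4, 4).
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
print(expert_action(grid, (0, 0), (4, 4)))
```

In a training loop, such expert actions could seed the replay buffer or replace random exploration when the RL policy stalls, while the high-fidelity simulator is reserved for evaluating and refining the learned policy.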

Citation (APA)

Qiu, J., Yu, C., Liu, W., Yang, T., Yu, J., Wang, Y., & Yang, H. (2021). Low-Cost Multi-Agent Navigation via Reinforcement Learning with Multi-Fidelity Simulator. IEEE Access, 9, 84773–84782. https://doi.org/10.1109/ACCESS.2021.3085328
