Most consensus-based task allocation algorithms assume reliable and unlimited communication between agents. However, this assumption is easily violated in real environments with limited bandwidth and message collisions. This paper presents a deep reinforcement learning framework in which agents learn how to schedule and censor themselves among other agents competing for access to a limited communication medium. In particular, the framework learns to schedule inter-agent communication so as to improve task allocation performance in environments constrained by limited bandwidth and message collisions. The proposed approach, called Communication-Aware Consensus-Based Bundle Algorithm (CA-CBBA), extends CBBA such that the learned communication policy enables efficient utilization of the shared medium by prioritizing agents whose messages are important for the mission. Furthermore, agents in denser parts of the network are censored appropriately to alleviate message collisions and the hidden node problem. We evaluate our approach in various task assignment scenarios, and the results show that CA-CBBA outperforms CBBA in terms of convergence time, conflict resolution rate, and task allocation reward. Moreover, we show that CA-CBBA yields a policy that generalizes beyond the training set to handle larger team sizes. Finally, results on time-critical problems, such as a search-and-rescue mission, show that CA-CBBA also outperforms the baselines considered (CBBA, MCDGA, and ACBBA) in terms of the number of unassigned and conflicted tasks in most scenarios.
Raja, S., Habibi, G., & How, J. P. (2022). Communication-Aware Consensus-Based Decentralized Task Allocation in Communication Constrained Environments. IEEE Access, 10, 19753–19767. https://doi.org/10.1109/ACCESS.2021.3138857