A deep hierarchical reinforcement learner for aerial shepherding of ground swarms

Abstract

This paper introduces a deep reinforcement learning method for training an autonomous aerial agent to act as a shepherd, guiding a swarm of ground vehicles. The learner is situated in a high-fidelity Robot Operating System (ROS)-based simulation environment in which an Unmanned Aerial Vehicle (UAV) learns to guide a swarm of Unmanned Ground Vehicles (UGVs) to a target location. Our approach combines machine education, apprenticeship bootstrapping, and deep-learning-based methodologies to decompose the complex shepherding strategy into sub-problems requiring simpler skills, which are then fused into the overall skill set required for shepherding. The proposed methodology is effective in training the UAV agent under multiple reward-design schemes.
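The paper itself does not publish code, but the hierarchical decomposition it describes can be illustrated with a minimal, hand-coded sketch. The sub-skill names (collecting scattered agents vs. driving the herd toward the goal) and the rule-based selector below are assumptions in the style of classical shepherding models, standing in for the learned sub-policies and the learned high-level policy:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def behind_point(pos, goal, offset=1.0):
    """A point 'offset' behind pos along the goal-to-pos direction,
    i.e. where the shepherd should stand to push pos toward the goal."""
    dx, dy = pos[0] - goal[0], pos[1] - goal[1]
    n = math.hypot(dx, dy) or 1.0
    return (pos[0] + offset * dx / n, pos[1] + offset * dy / n)

def steer(uav, target, step=0.5):
    """Low-level action: a fixed-magnitude step from uav toward target."""
    dx, dy = target[0] - uav[0], target[1] - uav[1]
    n = math.hypot(dx, dy) or 1.0
    return (step * dx / n, step * dy / n)

def skill_collect(uav, sheep, goal):
    """Sub-skill: move behind the UGV furthest from the goal."""
    furthest = max(sheep, key=lambda s: dist(s, goal))
    return steer(uav, behind_point(furthest, goal))

def skill_drive(uav, sheep, goal):
    """Sub-skill: move behind the herd centroid to push it goalward."""
    cx = sum(s[0] for s in sheep) / len(sheep)
    cy = sum(s[1] for s in sheep) / len(sheep)
    return steer(uav, behind_point((cx, cy), goal))

def shepherd_action(uav, sheep, goal, spread_threshold=2.0):
    """High-level selector: collect if the herd is scattered,
    otherwise drive it toward the goal (threshold is an assumption)."""
    cx = sum(s[0] for s in sheep) / len(sheep)
    cy = sum(s[1] for s in sheep) / len(sheep)
    spread = max(dist(s, (cx, cy)) for s in sheep)
    skill = skill_collect if spread > spread_threshold else skill_drive
    return skill(uav, sheep, goal)
```

In the paper's approach, both the sub-skills and the selector would instead be trained with deep reinforcement learning; the sketch only shows how simple skills compose into an overall shepherding behaviour.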

Citation (APA)

Nguyen, H. T., Nguyen, T. D., Garratt, M., Kasmarik, K., Anavatti, S., Barlow, M., & Abbass, H. A. (2019). A deep hierarchical reinforcement learner for aerial shepherding of ground swarms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11953 LNCS, pp. 658–669). Springer. https://doi.org/10.1007/978-3-030-36708-4_54
