AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning

Abstract

In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of the body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and do not generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision-making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real-world conditions.
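The abstract does not include implementation details, but the PPO objective it relies on is standard. The following minimal sketch (illustrative only; the function name, NumPy formulation, and the clipping range of 0.2 are assumptions, not the authors' implementation) computes PPO's clipped surrogate loss from per-sample probability ratios and advantage estimates:

```python
import numpy as np

def ppo_clip_loss(ratios, advantages, eps=0.2):
    """Clipped surrogate objective used by PPO.

    ratios:     pi_theta(a|s) / pi_theta_old(a|s), one per sample
    advantages: advantage estimates, one per sample
    eps:        clipping range (0.2 is a common default)
    Returns the negated objective, i.e. a loss to minimize.
    """
    unclipped = ratios * advantages
    # Clipping the ratio removes the incentive to move the new policy
    # far from the old one in a single update.
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Example: a ratio of 2.0 with positive advantage is clipped to 1.2,
# so the large policy change earns no extra reward signal.
loss = ppo_clip_loss(np.array([2.0]), np.array([1.0]))
```

The pessimistic minimum over the clipped and unclipped terms is what makes PPO updates conservative, which is one reason it is a common choice for training control policies that must later transfer from simulation to real robots, as in this work.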

Citation (APA)

Tallamraju, R., Saini, N., Bonetto, E., Pabst, M., Liu, Y. T., Black, M. J., & Ahmad, A. (2020). AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning. IEEE Robotics and Automation Letters, 5(4), 6678–6685. https://doi.org/10.1109/LRA.2020.3013906
