Vision-Based Self-Assembly for Modular Multirotor Structures


Abstract

Modular aerial robots can adapt their shape to suit a wide range of tasks, but developing efficient self-reconfiguration algorithms remains a challenge. Existing self-reconfiguration algorithms rely on high-accuracy global positioning systems, which are not realistic for real-world applications. In this letter, we study self-reconfiguration algorithms that combine a low-accuracy global positioning system (e.g., GPS) with on-board relative positioning (e.g., visual sensing) for precise docking actions. We present three algorithms: 1) parallelized self-assembly sequencing that minimizes the number of serial 'docking steps'; 2) parallelized self-assembly sequencing that minimizes the total distance traveled by modules; and 3) parallelized self-reconfiguration that breaks an initial structure down as little as possible before assembling a new structure. The algorithms take the constraints of the local sensors into account and use heuristics to provide computationally efficient solutions to the combinatorial problem. Our evaluations in 2-D and 3-D simulations show that the algorithms scale with the number of modules and the structure shape.
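To give intuition for the first algorithm's objective, the sketch below shows a toy greedy pairing scheme that docks substructures in parallel rounds. It is an illustration of the minimization target, not the paper's algorithm: it ignores module geometry and sensor constraints and simply halves the number of substructures each round, which attains the ceil(log2(n)) lower bound on serial docking steps for n modules.

```python
import math

def parallel_docking_rounds(substructures):
    """Greedily pair substructures; docks within one round run in parallel.

    Returns a list of rounds, each a list of (a, b) docking pairs.
    Illustrative sketch only: geometry and sensing constraints from the
    paper are not modeled here.
    """
    rounds = []
    current = list(substructures)
    while len(current) > 1:
        round_pairs = []
        merged = []
        it = iter(current)
        for a in it:
            b = next(it, None)
            if b is None:
                merged.append(a)  # odd one out waits for the next round
            else:
                round_pairs.append((a, b))
                merged.append(a + b)  # list concatenation stands in for docking
        rounds.append(round_pairs)
        current = merged
    return rounds

# 10 single-module substructures assemble in ceil(log2(10)) = 4 serial steps.
rounds = parallel_docking_rounds([[i] for i in range(10)])
```

With 10 modules, the 9 required docks are packed into 4 serial rounds instead of 9, which is the kind of parallelism the sequencing algorithm optimizes for.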

Citation (APA)

Litman, Y., Gandhi, N., Phan, L. T. X., & Saldana, D. (2021). Vision-Based Self-Assembly for Modular Multirotor Structures. IEEE Robotics and Automation Letters, 6(2), 2202–2208. https://doi.org/10.1109/LRA.2021.3061380
