Visually Guided UGV for Autonomous Mobile Manipulation in Dynamic and Unstructured GPS-Denied Environments

  • Vohra, M.
  • Behera, L.

Abstract

A robotic solution for unmanned ground vehicles (UGVs) to execute the highly complex task of object manipulation in autonomous mode is presented. The paper focuses on developing an autonomous robotic system capable of assembling elementary blocks to build large 3D structures in GPS-denied environments. The key contributions of this systems paper are: (i) a deep learning-based unified multi-task visual perception system for object detection, part detection, instance segmentation, and tracking; (ii) an electromagnetic gripper design for robust grasping; and (iii) system integration, in which the multiple system components are combined into an optimized software stack. The complete mechatronic and algorithmic design of the UGV for this application is detailed, and the performance and efficacy of the overall system are reported through several rigorous experiments.
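The abstract does not describe the internals of the unified multi-task perception system. As a rough illustration only, the sketch below shows one common way such a network is structured: a single shared backbone feeding separate heads for detection, part detection, and instance segmentation. The class names, head designs, and parameter choices here are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torchvision

class MultiTaskPerception(nn.Module):
    """Illustrative multi-task network: one shared feature extractor,
    one lightweight head per perception task. Not the paper's design."""

    def __init__(self, num_classes=4, num_parts=6):  # counts are hypothetical
        super().__init__()
        # Shared backbone: ResNet-18 with its avgpool and fc layers removed,
        # leaving a dense 512-channel feature map.
        resnet = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        c = 512  # channel count of ResNet-18's final conv stage
        # Simple 1x1-conv heads keep the sketch short; real systems would
        # typically use decoders or detection-specific heads instead.
        self.detect_head = nn.Conv2d(c, num_classes, kernel_size=1)
        self.part_head = nn.Conv2d(c, num_parts, kernel_size=1)
        self.mask_head = nn.Conv2d(c, 1, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)  # computed once, reused by every head
        return {
            "detection": self.detect_head(feats),
            "parts": self.part_head(feats),
            "masks": torch.sigmoid(self.mask_head(feats)),
        }

# Usage example on a dummy camera frame.
model = MultiTaskPerception()
outputs = model(torch.randn(1, 3, 480, 640))
print({name: tuple(t.shape) for name, t in outputs.items()})
```

Sharing the backbone in this way is the usual motivation for a unified multi-task design on a mobile robot: the expensive feature extraction runs once per frame, and each task adds only a small per-head cost.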

Citation (APA)

Vohra, M., & Behera, L. (2023). Visually Guided UGV for Autonomous Mobile Manipulation in Dynamic and Unstructured GPS-Denied Environments (pp. 1–13). https://doi.org/10.1007/978-981-19-2126-1_1
