RobotFusion: Grasping with a robotic manipulator via multi-view reconstruction


Abstract

We propose a complete system for 3D object reconstruction and grasping based on an articulated robotic manipulator. We deploy an RGB-D sensor as an end effector mounted directly on the robotic arm, and process the acquired data to perform multi-view 3D reconstruction and object grasping. We leverage the high repeatability of the robotic arm to estimate 3D camera poses with millimeter accuracy and to control the sensor's six DOF within a dexterous workspace. Thereby, we can estimate camera poses directly from robot kinematics and deploy a Truncated Signed Distance Function (TSDF) to accurately fuse multiple views into a unified 3D reconstruction of the scene. We then propose an efficient approach to segment the sought objects out of a planar workbench, as well as a novel algorithm to automatically estimate grasping points.
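The core of the pipeline is TSDF fusion: with camera poses known from robot kinematics, each depth frame is integrated into a voxel grid as a truncated signed distance plus a running weight. The sketch below is a minimal, generic TSDF update in NumPy, not the authors' implementation; the volume layout, truncation value, and unit per-frame weight are illustrative assumptions.

```python
import numpy as np

def update_tsdf(tsdf, weights, origin, voxel_size, depth, K, T_cam_world,
                trunc=0.02):
    """Integrate one depth frame into a TSDF volume.

    tsdf, weights : (X, Y, Z) arrays holding the running fusion state
    origin        : world coordinates of voxel (0, 0, 0)
    K             : 3x3 pinhole camera intrinsics
    T_cam_world   : 4x4 pose mapping world points into the camera frame
                    (obtained from robot kinematics in the paper's setup)
    trunc         : truncation distance in metres (illustrative value)
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre, flattened in C order.
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    # Transform voxel centres into the camera frame.
    pts_cam = pts @ T_cam_world[:3, :3].T + T_cam_world[:3, 3]
    z = pts_cam[:, 2]
    front = z > 1e-6  # only voxels in front of the camera project sensibly
    # Project onto the image plane (pinhole model).
    uv = pts_cam @ K.T
    u = np.zeros(z.shape, dtype=int)
    v = np.zeros(z.shape, dtype=int)
    u[front] = np.round(uv[front, 0] / z[front]).astype(int)
    v[front] = np.round(uv[front, 1] / z[front]).astype(int)
    H, W = depth.shape
    valid = front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Measured depth at each voxel's pixel; 0 marks missing data.
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance: positive for voxels in front of the observed surface.
    sdf = d - z
    upd = valid & (d > 0) & (sdf > -trunc)
    sdf = np.clip(sdf / trunc, -1.0, 1.0)
    # Running weighted average over views (per-frame weight of 1).
    t = tsdf.reshape(-1)
    w = weights.reshape(-1)
    t[upd] = (t[upd] * w[upd] + sdf[upd]) / (w[upd] + 1.0)
    w[upd] += 1.0
    return tsdf, weights
```

Because poses come from the arm's forward kinematics rather than visual tracking, each frame can be integrated independently; the surface is then recovered from the fused volume as the TSDF zero crossing (e.g. via marching cubes).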

Citation (APA)

de Gregorio, D., Tombari, F., & Di Stefano, L. (2016). RobotFusion: Grasping with a robotic manipulator via multi-view reconstruction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9915 LNCS, pp. 634–647). Springer Verlag. https://doi.org/10.1007/978-3-319-49409-8_54
