Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing


Abstract

Visual servoing allows the motion of a robot to be controlled using information from its visual sensors in order to achieve manipulation tasks. In this work we design and implement a robust visual servoing framework for reaching and grasping behaviours for a humanoid service robot with limited control capabilities. Our approach successfully exploits a 5-degrees-of-freedom manipulator, overcoming the control limitations of the robot while avoiding singularities and the need for stereo vision techniques. Using a single camera, we combine a marker-less model-based tracker for the target object, pattern tracking of the end-effector to cope with the robot’s inaccurate kinematics, and an alternating pose-based visual servoing technique with eye-in-hand and eye-to-hand configurations to achieve a fully functional grasping system. The overall method yields better grasping results than conventional motion planning and simple inverse kinematics techniques for this robotic morphology, demonstrating a 48.8% increase in the grasping success rate.
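For readers unfamiliar with pose-based visual servoing, the sketch below illustrates the basic proportional control law such a system rests on: the pose error between the visually tracked end-effector and the grasp pose estimated from the object tracker is turned into a 6-D velocity command that drives the error exponentially toward zero. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name pbvs_velocity, the gain lam, and the use of NumPy/SciPy are choices made for the example.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pbvs_velocity(T_current, T_desired, lam=0.5):
    """Proportional pose-based visual servoing step (illustrative sketch).

    T_current, T_desired: 4x4 homogeneous transforms of the end-effector
    and of the target grasp pose, expressed in a common reference frame
    (e.g. as estimated by the end-effector and object trackers).
    Returns a 6-D twist [vx, vy, vz, wx, wy, wz] that reduces the pose
    error at rate lam.
    """
    # Translation error: where the gripper should be minus where it is.
    t_err = T_desired[:3, 3] - T_current[:3, 3]

    # Rotation error as a rotation vector (axis * angle) of R_des @ R_cur^T.
    R_err = T_desired[:3, :3] @ T_current[:3, :3].T
    w_err = R.from_matrix(R_err).as_rotvec()

    # Classic proportional law: commanded twist proportional to the error,
    # giving exponential decay of both position and orientation error.
    return lam * np.concatenate([t_err, w_err])
```

In a loop, the returned twist would be mapped to joint velocities through the manipulator Jacobian and re-computed as the trackers update the two poses; alternating which pose is taken from the eye-in-hand and eye-to-hand views corresponds to the switching scheme described in the abstract.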

Citation (APA)

Ardón, P., Dragone, M., & Erden, M. S. (2018). Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10894 LNCS, pp. 353–365). Springer Verlag. https://doi.org/10.1007/978-3-319-93399-3_31
