Visual servoing controls the motion of a robot using information from its visual sensors to achieve manipulation tasks. In this work we design and implement a robust visual servoing framework for reaching and grasping behaviours on a humanoid service robot with limited control capabilities. Our approach successfully exploits a 5-degree-of-freedom manipulator, overcoming the control limitations of the robot while avoiding both kinematic singularities and the need for stereo vision. Using a single camera, we combine a marker-less model-based tracker for the target object, a pattern tracker for the end-effector to cope with the robot’s inaccurate kinematics, and an alternating pose-based visual servoing technique with eye-in-hand and eye-to-hand configurations to achieve a fully functional grasping system. For this robot morphology, the overall method outperforms conventional motion planning and simple inverse kinematics techniques, increasing the grasping success rate by 48.8%.
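The pose-based visual servoing mentioned above is typically realised with a proportional control law that drives a 6-dimensional pose error to zero. The sketch below is a minimal, generic illustration of that law, not the authors' implementation; the gain value and function names are assumptions for illustration.

```python
import numpy as np

def rotation_log(R):
    # Axis-angle vector (theta * u) of a rotation matrix, via the inverse
    # Rodrigues formula; used as the orientation part of the pose error.
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pbvs_velocity(t_cur, R_cur, t_des, R_des, lam=0.5):
    # Pose-based visual servoing: stack translation and orientation errors
    # into e = (t_cur - t_des, theta*u) and command the exponential-decay
    # twist v = -lambda * e (hypothetical gain lam).
    R_err = R_des.T @ R_cur  # rotation from desired frame to current frame
    e = np.concatenate([t_cur - t_des, rotation_log(R_err)])
    return -lam * e
```

Integrating this twist moves the end-effector pose toward the desired grasp pose; at convergence the error, and hence the commanded velocity, is zero.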
Ardón, P., Dragone, M., & Erden, M. S. (2018). Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10894 LNCS, pp. 353–365). Springer Verlag. https://doi.org/10.1007/978-3-319-93399-3_31