Plane-based humanoid robot navigation and object model construction for grasping


Abstract

In this work we present an approach to humanoid robot navigation and object model construction for grasping, using only RGB-D data from an onboard depth sensor. A plane-based representation provides a high-level model of the workspace and is used to estimate both the global robot pose and the pose with respect to the object. Visual feedback is used to drive the robot to the desired pose for grasping. In the pre-grasping pose the robot determines the object pose as well as its dimensions. In this local grasping approach, a simulator with our high-level scene representation and a virtual camera is used to fine-tune the motion controllers and to simulate and validate the grasping process. We present experimental results obtained in simulations with a virtual camera and robot, as well as with a real humanoid robot equipped with an RGB-D camera, which performed object grasping in low-texture layouts.
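The plane-based workspace model described above starts from segmenting the dominant planes in the RGB-D point cloud. The sketch below is not the authors' pipeline; it illustrates one common way to obtain such a representation, iterative RANSAC plane segmentation with the Open3D library. The file name scene.pcd, the thresholds, and the helper function extract_planes are illustrative assumptions, not part of the paper.

```python
# A minimal sketch (assumed, not the authors' implementation) of building
# a plane-based scene representation from an RGB-D point cloud via
# iterative RANSAC plane fitting using Open3D.
import open3d as o3d

def extract_planes(pcd, max_planes=5, distance_threshold=0.01,
                   min_inliers=500):
    """Segment up to max_planes dominant planes from a point cloud.

    Returns a list of (plane_model, inlier_cloud) pairs, where
    plane_model holds (a, b, c, d) of the plane ax + by + cz + d = 0.
    """
    planes = []
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        # Fit one plane to the remaining points with RANSAC.
        model, inliers = rest.segment_plane(
            distance_threshold=distance_threshold,
            ransac_n=3,
            num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        planes.append((model, rest.select_by_index(inliers)))
        # Remove the inliers and search for the next plane.
        rest = rest.select_by_index(inliers, invert=True)
    return planes

if __name__ == "__main__":
    # "scene.pcd" is a placeholder for one RGB-D frame converted to a cloud.
    pcd = o3d.io.read_point_cloud("scene.pcd")
    for model, cloud in extract_planes(pcd):
        a, b, c, d = model
        print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
              f"{len(cloud.points)} inliers")
```

Each recovered plane (floor, walls, table top) can then serve as a landmark for pose estimation, and the object footprint on a support plane gives its pose and dimensions; the greedy plane-by-plane removal keeps the segmentation simple at the cost of a fixed extraction order.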

Cite (APA)

Gritsenko, P., Gritsenko, I., Seidakhmet, A., & Kwolek, B. (2019). Plane-based humanoid robot navigation and object model construction for grasping. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11129 LNCS, pp. 649–664). Springer Verlag. https://doi.org/10.1007/978-3-030-11009-3_40
