Visual and Tactile Fusion for Estimating the Pose of a Grasped Object

Abstract

This paper considers the problem of fusing vision and touch to estimate the 6D pose of an object while it is grasped. Assuming that a textured 3D model of the object is available, Scale-Invariant Feature Transform (SIFT) keypoints are first extracted from the object, and a Random Sample Consensus (RANSAC) method is used to match these features against the textured model. Optical flow is then used to visually track the object while a grasp is performed. After the hand contacts the object, a tactile-based pose estimation is performed using a particle filter. During grasp stabilization and hand movement, the pose of the object is continuously tracked by fusing the visual and tactile estimates with an extended Kalman filter. The main contribution of this work is the continuous use of both sensing modalities to reduce the uncertainty of tactile sensing in those degrees of freedom for which it provides no information, as demonstrated in the experimental validation.
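The core of the fusion step is an extended Kalman filter update in which the tactile measurement carries a large covariance along the degrees of freedom it cannot observe, so the visual estimate dominates there. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes an identity motion model, a direct 6D pose measurement from both modalities (which makes the update linear), an Euler-angle state, and hypothetical noise values.

```python
import numpy as np

# State x = [tx, ty, tz, roll, pitch, yaw].
# Both sensors report a full 6D pose, but the tactile covariance is inflated
# along the DoFs the contacts cannot observe (hypothetical values).

def ekf_predict(x, P, Q):
    # Constant-pose prediction: the grasped object is assumed quasi-static
    # between updates; process noise Q accounts for hand motion.
    return x, P + Q

def ekf_update(x, P, z, R):
    # Measurement model H = I (the sensor reports the pose directly),
    # so the EKF update reduces to the standard Kalman correction.
    H = np.eye(6)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

x = np.zeros(6)              # initial pose estimate
P = np.eye(6) * 1e-2         # initial covariance
Q = np.eye(6) * 1e-4         # process noise

R_vision  = np.eye(6) * 1e-3                              # vision: informative in all 6 DoF
R_tactile = np.diag([1e-3, 1e-3, 1e2, 1e2, 1e2, 1e-3])    # tactile: weak along unobserved DoFs

z_vision  = np.array([0.01, 0.00, 0.30, 0.00, 0.00, 0.05])  # hypothetical measurements
z_tactile = np.array([0.01, 0.01, 0.00, 0.00, 0.00, 0.04])

x, P = ekf_predict(x, P, Q)
x, P = ekf_update(x, P, z_vision,  R_vision)
x, P = ekf_update(x, P, z_tactile, R_tactile)
print(x)  # fused pose estimate
```

In this sketch the tactile update barely shifts the estimate along z, roll, and pitch (its weak directions), while still sharpening the remaining components, which is the intuition behind the paper's fusion scheme.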

Citation (APA)
Álvarez, D., Roa, M. A., & Moreno, L. (2020). Visual and Tactile Fusion for Estimating the Pose of a Grasped Object. In Advances in Intelligent Systems and Computing (Vol. 1093 AISC, pp. 184–198). Springer. https://doi.org/10.1007/978-3-030-36150-1_16
