Grasping the apparent contour

Abstract

Two-fingered grasps of a priori unknown 3D objects can be achieved effectively using active vision. Real-time contour tracking is used to localise the silhouette of the object, viewed from a camera mounted with a gripper on a moving robot arm. Geometric information from analysis of motion around one vantage point is used to guide the robot towards a new vantage point from which the rim (the inverse image of the silhouette) admits a more stable grasp. This use of deliberate camera motion to compute the best direction for the robot's subsequent motion is computationally efficient: visual processing is concentrated around potential grasp points, and costly global reconstruction of an entire surface is avoided. The computation is shown to be robust, both theoretically, owing to a connection with visual parallax, and in computational experiments.
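To make the grasp-selection idea concrete, the sketch below is a minimal illustration, not the authors' algorithm: it assumes a tracked 2D silhouette sampled as a closed polygon and applies a standard antipodal-normals heuristic to propose candidate two-fingered grasp point pairs. Function names, the angle tolerance, and the example contour are all assumptions made for illustration.

```python
# Illustrative sketch (assumed, not the paper's method): candidate two-fingered
# grasps on a sampled silhouette contour via an antipodal-normals test.
import numpy as np

def contour_normals(points):
    """Outward unit normals of a closed contour sampled counter-clockwise."""
    # Tangents via central differences along the closed polygon.
    tangents = np.roll(points, -1, axis=0) - np.roll(points, 1, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Rotating the tangent by -90 degrees gives the outward normal for a CCW contour.
    return np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)

def antipodal_pairs(points, angle_tol_deg=10.0):
    """Index pairs (i, j) whose outward normals oppose each other and whose
    connecting line is close to both normal directions (antipodal grasp test)."""
    normals = contour_normals(points)
    cos_tol = np.cos(np.radians(angle_tol_deg))
    pairs = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = points[j] - points[i]
            d /= np.linalg.norm(d)
            # Fingers push inward along -normal; both pushes must roughly align
            # with the line joining the two contact points.
            if (np.dot(normals[i], normals[j]) < -cos_tol
                    and np.dot(normals[i], -d) > cos_tol
                    and np.dot(normals[j], d) > cos_tol):
                pairs.append((i, j))
    return pairs

if __name__ == "__main__":
    # Example: an elliptical silhouette sampled counter-clockwise.
    t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ellipse = np.stack([3.0 * np.cos(t), 1.0 * np.sin(t)], axis=1)
    candidates = antipodal_pairs(ellipse)
    print(f"{len(candidates)} candidate antipodal grasp pairs found")
```

In the paper's setting the tracked contour would come from the live silhouette rather than a synthetic ellipse, and the choice among candidate pairs is further informed by the geometry recovered from deliberate camera motion.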

Citation (APA)
Taylor, M. J., & Blake, A. (1994). Grasping the apparent contour. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 801 LNCS, pp. 25–34). Springer Verlag. https://doi.org/10.1007/bfb0028332
