A framework for visual servoing

Abstract

We consider typical manipulation tasks in terms of a service robot framework. Given a task such as "Pick up the cup from the dinner table", we present the visual systems required to accomplish it. A standard robot platform with a PUMA 560 arm mounted on top is used for experimental evaluation. The classical approach-align-grasp idea is used to design the manipulation system, in which both visual and tactile feedback are used to accomplish the given task. In terms of image processing, we start with a recognition system that provides a 2D estimate of the object's position in the image. A 2D tracking system is then presented and used to keep the object in the field of view during the approach stage. For the alignment stage, two systems are available: the first is a model-based tracking system that estimates the complete pose and velocity of the object; the second is based on corner matching and estimates the homography between two images. In terms of tactile feedback, we present a grasping system that, at this stage, performs power grasps. Its main objective is to compensate for minor errors in the object position/orientation estimate produced by the vision system. © Springer-Verlag Berlin Heidelberg 2003.
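The second alignment system mentioned above estimates the homography between two images from corner matches. As a generic illustration of that technique (not the paper's own implementation, which is not detailed in the abstract), the following sketch recovers a homography from four point correspondences with the direct linear transform (DLT) in NumPy:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the DLT.

    src, dst: (N, 2) arrays of matched corner coordinates, N >= 4.
    """
    assert src.shape == dst.shape and src.shape[0] >= 4
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the constraint A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # h is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale so that H[2, 2] = 1

# Usage: recover a known homography from four synthetic corner matches.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.03, 0.98, -2.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
src_h = np.hstack([src, np.ones((4, 1))])        # homogeneous coordinates
dst_h = src_h @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]                # back to pixel coordinates
H_est = estimate_homography(src, dst)
```

In practice one would use many matches with coordinate normalization and robust outlier rejection (e.g. RANSAC); the sketch shows only the core linear estimation step.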

Citation (APA)

Kragic, D., & Christensen, H. I. (2003). A framework for visual servoing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2626, pp. 345–354). Springer-Verlag. https://doi.org/10.1007/3-540-36592-3_33
