Vision-based guidance and control of robots in projective space


Abstract

In this paper, we propose a method that uses stereo vision to visually guide and control a robot in projective three-space. Our formulation is entirely projective: metric models are not required and are replaced with projective models of both the stereo geometry and the robot’s “projective kinematics”. Such models are preferable because they can be identified from the vision data without any a priori knowledge. More precisely, we present constraints on projective space that reflect the visibility and mobility underlying a given task. Using an interaction matrix that relates articulation space to projective space, we decompose the task into three elementary components: a translation and two rotations. This allows us to define trajectories that are both visually and globally feasible, i.e. problems such as self-occlusion, local minima, and divergent control no longer arise. We do not adopt straightforward image-based trajectory tracking. Instead, we propose a directly computed control that combines a feed-forward steering loop with a feedback control loop based on the Cartesian error of each of the task’s components.
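The abstract's interaction matrix relating articulation space to the (projective) feature space follows the standard visual-servoing pattern, where joint velocities are computed from the task error via a pseudo-inverse of that matrix. The sketch below illustrates only this generic pattern; it is not the authors' projective formulation (the paper's contribution is precisely that the matrix is identified projectively from stereo data), and all names (`servo_step`, `gain`) are illustrative assumptions.

```python
import numpy as np

def servo_step(L, e, gain=0.5):
    """One feedback servoing step (illustrative, not the paper's method).

    L    : interaction matrix mapping joint velocities to feature velocities
    e    : current task error in feature space
    gain : proportional gain controlling the convergence rate

    Returns joint velocities q_dot = -gain * pinv(L) @ e, which drive the
    task error exponentially toward zero under the standard model e_dot = L q_dot.
    """
    return -gain * np.linalg.pinv(L) @ e

# Toy example: 2 features, 2 joints, identity interaction matrix.
L = np.eye(2)
e = np.array([1.0, -2.0])
q_dot = servo_step(L, e)  # velocity command pointing opposite the error
```

A feed-forward term, as in the paper's combined scheme, would add the velocity of a reference trajectory to this feedback command so the robot tracks the planned path rather than merely regulating the error.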

Citation (APA)

Ruf, A., & Horaud, R. (2000). Vision-based guidance and control of robots in projective space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1843, pp. 50–66). Springer Verlag. https://doi.org/10.1007/3-540-45053-x_4
