Neural Networks for Robot Eye-Hand Coordination

  • Groen F
  • Kröse B
  • van der Smagt P
  • et al.

Abstract

This paper focuses on static hand-eye coordination. The key issue addressed is the construction of a controller that eliminates the need for calibration; instead, the system should be self-learning and able to adapt itself to changes in the environment. In this application only positional information is used, hence the term 'static' above. Three coordinate domains are used to describe the system: the Cartesian world domain, the vision domain, and the robot domain. The task to be solved is the following: a robot manipulator has to be positioned directly above a pre-specified target, such that the target can be grasped. The target is specified in terms of visual parameters. Only the (x, y, z) position of the end-effector relative to the target is taken into account; this suffices for many pick-and-place problems encountered in industry. (In a number of cases the rotation of the hand is also of importance, but this rotation can be executed separately from the 3D positioning problem.) Thus the remaining problem has 3 degrees of freedom (DoF).
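The abstract describes learning a calibration-free mapping from visual parameters to end-effector positions, rather than deriving it from a calibrated camera model. A minimal sketch of that idea, assuming a hypothetical camera model and a small two-layer NumPy network (this is an illustration of the general approach, not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward map: robot-domain position (x, y, z) -> visual
# parameters. This stands in for the camera; it is NOT the paper's setup.
W_cam = rng.normal(size=(3, 4)) * 0.5

def camera(p):
    # Smooth nonlinear "vision domain" encoding of a position.
    return np.tanh(p @ W_cam)

# Training data: sample robot positions and observe their visual parameters.
P = rng.uniform(-1, 1, size=(500, 3))   # robot domain
V = camera(P)                            # vision domain

# Two-layer network that learns the inverse map V -> P, i.e. "where should
# the end-effector go, given what the camera sees" -- no explicit calibration.
W1 = rng.normal(size=(4, 32)) * 0.3
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3)) * 0.3
b2 = np.zeros(3)

def forward(v):
    h = np.tanh(v @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.2
for epoch in range(5000):
    h, pred = forward(V)
    err = pred - P                       # positioning error, robot domain
    # Backpropagation for the two-layer net (mean-squared-error loss).
    gW2 = h.T @ err / len(V)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = V.T @ dh / len(V)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(V)
rmse = np.sqrt(((pred - P) ** 2).mean())
```

If the camera moves or its parameters drift, retraining on fresh (position, observation) pairs adapts the controller without any recalibration step, which is the self-learning property the abstract emphasizes.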

CITATION STYLE

APA

Groen, F. C. A., Kröse, B. J. A., van der Smagt, P. P., Bartholomeus, M. G. P., & Noest, A. J. (1993). Neural Networks for Robot Eye-Hand Coordination. In ICANN ’93 (pp. 211–218). Springer London. https://doi.org/10.1007/978-1-4471-2063-6_50
