A visually guided robot and a neural network join to grasp slanted objects

  • van der Smagt, P.
  • Dev, A.
  • Groen, F. C. A.

Abstract

In this paper we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, which carries a single camera in its end-effector, must be positioned above a target of varying pan and tilt that is placed against a textured background. It is shown that a trajectory can be planned in visual space from components of the optic flow, and that this trajectory can be translated into joint torques by a self-learning neural network. No model of the robot, the camera, or the environment is used, and the method reaches a high grasping accuracy after only a few trials.
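The abstract does not spell out the controller, so the following is only an illustrative sketch of the general idea: use the divergence of the optic-flow field as a visual-space feature (it reflects the rate of image expansion during the approach), and let a small self-adjusting feedforward network map that feature to joint torques. All names, network sizes, and the teaching signal below are hypothetical, not the authors' implementation.

```python
# Illustrative sketch only: optic-flow feature extraction plus a tiny
# torque network trained by gradient descent on a correction signal.
import numpy as np

def flow_divergence(flow):
    """Mean divergence of a dense optic-flow field.

    flow: array of shape (H, W, 2) holding per-pixel (u, v) image
    velocities. The divergence du/dx + dv/dy measures how fast the
    target expands in the image as the camera approaches it.
    """
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    return float(np.mean(du_dx + dv_dy))

class TorqueNet:
    """One-hidden-layer network mapping visual features to joint torques."""

    def __init__(self, n_in, n_hidden, n_joints, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.1, (n_joints, n_hidden))
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.h = np.tanh(self.W1 @ x)      # hidden activations
        return self.W2 @ self.h            # one torque per joint

    def update(self, torque_error):
        # Gradient step on 0.5 * ||torque_error||^2 w.r.t. both weight
        # matrices; the correction signal is assumed to come from some
        # external teacher (e.g. a coarse hand-tuned controller).
        dW2 = np.outer(torque_error, self.h)
        dh = (self.W2.T @ torque_error) * (1.0 - self.h ** 2)
        dW1 = np.outer(dh, self.x)
        self.W2 -= self.lr * dW2
        self.W1 -= self.lr * dW1

# Hypothetical usage: a synthetic radially expanding flow field and a
# dummy teaching signal, just to show the data flow end to end.
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W].astype(float)
flow = np.stack([(xs - W / 2) * 0.01, (ys - H / 2) * 0.01], axis=-1)

net = TorqueNet(n_in=1, n_hidden=8, n_joints=3)
feature = np.array([flow_divergence(flow)])
tau = net.forward(feature)           # torques for 3 joints
net.update(tau - np.zeros(3))        # drive the output toward a target
```

Feeding the visual feature rather than any Cartesian pose into the network is what keeps the scheme model-free: no calibration of the robot, camera, or environment is needed, consistent with the claim in the abstract.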

Citation (APA)

van der Smagt, P., Dev, A., & Groen, F. C. A. (1995). A visually guided robot and a neural network join to grasp slanted objects. In Neural Networks: Artificial Intelligence and Industrial Applications (pp. 121–128). Springer London. https://doi.org/10.1007/978-1-4471-3087-1_25
