In this paper we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, which carries a single camera in its end-effector, must be positioned above a target of varying pan and tilt that is placed against a textured background. It is shown that a trajectory can be planned in visual space using components of the optic flow, and that this trajectory can be translated into joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.
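To make the optic-flow component concrete, below is a minimal, hypothetical sketch (not the paper's actual implementation) of how one flow component, the divergence, can be estimated from a monocular flow field by a least-squares affine fit, and how it yields a time-to-contact usable for planning an approach trajectory in visual space. All function names and the synthetic flow field are assumptions for illustration.

```python
import numpy as np

def flow_divergence(xs, ys, us, vs):
    """Fit an affine flow model u = a0 + a1*x + a2*y, v = b0 + b1*x + b2*y
    by least squares; the divergence is du/dx + dv/dy = a1 + b2."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    a, *_ = np.linalg.lstsq(A, us, rcond=None)
    b, *_ = np.linalg.lstsq(A, vs, rcond=None)
    return a[1] + b[2]

# Synthetic radially expanding flow for a camera approaching a
# fronto-parallel textured surface: u = (D/2)*x, v = (D/2)*y,
# so the true divergence is D.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 200)
ys = rng.uniform(-1.0, 1.0, 200)
D_true = 0.4
us = 0.5 * D_true * xs
vs = 0.5 * D_true * ys

D_est = flow_divergence(xs, ys, us, vs)
# For approach toward a fronto-parallel plane, time-to-contact is 2/div.
tau = 2.0 / D_est
```

A controller could servo on such flow components (divergence for depth, the affine terms for slant) while a neural network learns the mapping from the planned visual trajectory to joint torques.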
van der Smagt, P., Dev, A., & Groen, F. C. A. (1995). A visually guided robot and a neural network join to grasp slanted objects. In Neural Networks: Artificial Intelligence and Industrial Applications (pp. 121–128). Springer London. https://doi.org/10.1007/978-1-4471-3087-1_25