Implementation of computer vision guided peg-hole insertion task performed by robot through LabVIEW


Abstract

This paper presents a computer vision guided peg-hole insertion task performed by a robot. Two cameras were mounted, one capturing the top view and the other the side view, to calibrate the three-dimensional coordinates of the centers of the peg and the hole detected in the images to real-world coordinates, so that the robot can grasp the peg and insert it into the hole automatically. The experiment uses normalized cross-correlation based template matching and a grid-based distortion-model calibration algorithm. A linear equation maps the linear displacement of the robot arm and the rotational displacement of the gripper to the pulse count required by the encoder. The experiment was conducted on a gantry robot, and the implementation was carried out in the LabVIEW environment. We achieved good accuracy, with an experimental error of 5% for template matching and ±2.5 mm for the calibration algorithm.
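The abstract's template-matching step can be illustrated with a minimal sketch of normalized cross-correlation (NCC). This is not the authors' LabVIEW implementation (which presumably uses NI Vision's built-in pattern matching); it is a plain NumPy version of the underlying idea: slide the template over the image and score each window by the correlation of the zero-mean window with the zero-mean template, normalized by their magnitudes. The function name `ncc_match` and the brute-force search are illustrative assumptions.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the (row, col) of the
    top-left corner of the best-matching window, plus its NCC score.
    Scores lie in [-1, 1]; a score of 1 means an exact (affine) match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())        # template magnitude
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()               # zero-mean window
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue                    # flat window: undefined score
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

The returned corner, offset by half the template size, gives the center position of the peg or hole in image coordinates, which the paper's calibration step then maps to world coordinates.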

CITATION STYLE

APA

Sauceda Cienfuegos, A., Rodriguez, E., Romero, J., Ortega Aranda, D., & Saha, B. N. (2017). Implementation of computer vision guided peg-hole insertion task performed by robot through LabVIEW. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10061 LNAI, pp. 437–458). Springer Verlag. https://doi.org/10.1007/978-3-319-62434-1_36
