Lighting- and occlusion-robust view-based teaching/playback for model-free robot programming

Abstract

In this paper, we investigate a model-free method for robot programming referred to as view-based teaching/playback. It uses neural networks to map factor scores of input images onto robot motions. The method achieves greater robustness to changes in task conditions, including the initial pose of the object, than conventional teaching/playback. We devised an online algorithm that adaptively switches between the range and grayscale images used in view-based teaching/playback. When applied to pushing tasks with an industrial manipulator, view-based teaching/playback with the proposed algorithm succeeded even under changing lighting conditions. We also devised an algorithm that copes with occlusions using subimages, which worked successfully in experiments.
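To make the pipeline concrete, below is a minimal sketch of the view-based teaching/playback idea as summarized above: demonstration images are compressed to low-dimensional factor scores, and a neural network maps those scores to robot motion commands. The use of PCA for factor extraction, the scikit-learn MLP, and all dimensions and variable names are illustrative assumptions; the paper's actual factor analysis, network architecture, and image-switching/occlusion algorithms are not reproduced here.

# Sketch of view-based teaching/playback: factor scores -> motions.
# PCA stands in for the factor analysis; sizes are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# --- Teaching phase: record (image, motion) pairs from demonstration ---
n_demos, img_pixels, motion_dim = 200, 64 * 64, 3   # hypothetical sizes
images = np.random.rand(n_demos, img_pixels)        # flattened camera images
motions = np.random.rand(n_demos, motion_dim)       # e.g. end-effector displacements

# Reduce each image to a small vector of factor scores.
pca = PCA(n_components=10)
scores = pca.fit_transform(images)

# Train a neural network to map factor scores onto robot motions.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
net.fit(scores, motions)

# --- Playback phase: the current view alone drives the robot ---
current_view = np.random.rand(1, img_pixels)
command = net.predict(pca.transform(current_view))
print("motion command:", command)

During playback, only the current camera view is needed to generate a motion command, which is what makes the approach model-free: no geometric model of the object or environment is required.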

Citation (APA)

Maeda, Y., & Saito, Y. (2017). Lighting- and occlusion-robust view-based teaching/playback for model-free robot programming. In Advances in Intelligent Systems and Computing (Vol. 531, pp. 939–952). Springer Verlag. https://doi.org/10.1007/978-3-319-48036-7_68
