In this paper, we investigate a model-free method for robot programming referred to as view-based teaching/playback. It uses neural networks to map factor scores of input images onto robot motions. The method achieves greater robustness to changes in task conditions, including the initial pose of the object, than conventional teaching/playback. We devised an online algorithm for adaptively switching between range and grayscale images used in view-based teaching/playback. When applied to pushing tasks with an industrial manipulator, view-based teaching/playback using the proposed algorithm succeeded even under changing lighting conditions. We also devised an algorithm that copes with occlusions by using subimages, which worked successfully in experiments.
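The core pipeline described above, compressing camera images to low-dimensional factor scores and learning a mapping from scores to robot motions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses PCA for the factor scores and a plain least-squares linear map standing in for the paper's neural network, and all array shapes and data are made-up assumptions.

```python
import numpy as np

# Hypothetical sketch of view-based teaching/playback:
# 1) compress demonstration images to low-dimensional "factor scores" via PCA;
# 2) learn a mapping from factor scores to the recorded robot motions.
# The linear least-squares regressor below stands in for the neural
# network used in the paper; data, shapes, and names are illustrative.

rng = np.random.default_rng(0)

# Fake demonstration data: 50 flattened 8x8 "images" and matching 2-D motions.
images = rng.normal(size=(50, 64))
motions = images[:, :2] * 0.5 + rng.normal(scale=0.01, size=(50, 2))

# PCA: center the images and keep the top-k principal components.
mean = images.mean(axis=0)
centered = images - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:5]                 # top-5 eigen-images
scores = centered @ components.T    # factor scores, one row per image

# Fit the score -> motion mapping by least squares.
weights, *_ = np.linalg.lstsq(scores, motions, rcond=None)

def playback(image: np.ndarray) -> np.ndarray:
    """Map a new camera image to a robot motion command."""
    return ((image - mean) @ components.T) @ weights

predicted = playback(images[0])
```

At playback time, each incoming camera frame is projected onto the stored components and the learned mapping produces the next motion command, so no geometric model of the object or environment is needed.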
Maeda, Y., & Saito, Y. (2017). Lighting- and occlusion-robust view-based teaching/playback for model-free robot programming. In Advances in Intelligent Systems and Computing (Vol. 531, pp. 939–952). Springer Verlag. https://doi.org/10.1007/978-3-319-48036-7_68