Toward the flexible automation for robot learning from human demonstration using multimodal perception approach


Abstract

This study proposes a multi-modal perception approach that enables a robotic arm to perform flexible automation and simplifies the complicated coding process otherwise needed to control it. A depth camera is utilized to detect faces and hand gestures, providing operator identification and command recognition. In addition, the kinematics of the robotic arm associated with the positions of manipulated objects can be derived from information gathered through human demonstrations and detected objects. In the experiments, the proposed multi-modal perception system first recognizes the operator. The operator then demonstrates a task, with the assistance of gestures, to generate the learning data. Afterward, the robotic arm performs the same task as the human demonstration. While imitating the task, the robotic arm can also be guided by the operator's gesture commands.
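The abstract does not specify the vision stack, but the perception loop it describes (detect the operator's face, then read hand-gesture commands that drive demonstration recording or arm guidance) can be illustrated with a minimal sketch. The sketch below assumes OpenCV with a stock Haar cascade for face detection; the gesture classifier and the depth-camera capture source are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the multi-modal perception loop, assuming OpenCV.
# classify_gesture() is a hypothetical stand-in for the paper's gesture recognizer.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recognize_operator(frame):
    """Detect a face and return its bounding box, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None  # identity matching would follow here

def classify_gesture(frame):
    """Hypothetical hand-gesture command recognizer (e.g. landmarks + classifier)."""
    ...

cap = cv2.VideoCapture(0)  # stand-in for the depth camera's RGB stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if recognize_operator(frame) is not None:
        command = classify_gesture(frame)  # e.g. "start_demo", "stop", "guide"
        # The recognized command would drive demonstration recording,
        # replay of the learned task, or online guidance of the arm.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

In the actual system, the depth channel would additionally supply 3-D object positions from which the arm's target kinematics are derived during demonstration.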

CITATION STYLE

APA

Chen, J. H., Lu, G. Y., Chien, Y. H., Chiang, H. H., Wang, W. Y., & Hsu, C. C. (2019). Toward the flexible automation for robot learning from human demonstration using multimodal perception approach. In Proceedings of 2019 International Conference on System Science and Engineering, ICSSE 2019 (pp. 148–153). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICSSE.2019.8823444
