This paper presents a method that enables a robot to extract a demonstrator's key motions from unsegmented human motion through imitation learning. When a robot learns another agent's motions from an unsegmented time series, it must determine what to learn from the continuous motion. In most previous research on imitation learning by autonomous robots, the target motions given to the robots were segmented into several meaningful parts by the experimenters in advance. However, to imitate certain behaviors from the continuous motion of a person, a robot needs to find the segments that should be learned on its own. The learning architecture is built mainly on a switching autoregressive model (SARM), a simple phrase extraction method, and singular vector decomposition (SVD) for discriminating key motions. To achieve this goal, the architecture converts the continuous time series into a discrete time series of letters using the SARM, finds candidate key motions with a simple phrase extractor that exploits n-gram statistics, and removes meaningless segments from the candidate keywords using SVD. In our experiment, a demonstrator presented several unsegmented motions to a robot. The results revealed that the framework enabled the robot to acquire the several prepared key motions. © 2009 Springer Berlin Heidelberg.
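The last two stages of the pipeline described above, extracting candidate phrases from the discretized letter sequence via n-gram statistics and then filtering them with an SVD, can be sketched roughly as follows. This is only an illustrative sketch, not the authors' implementation: the function names, thresholds, and the LSA-style scoring of a candidate-by-sequence count matrix are all assumptions introduced here for clarity.

```python
import numpy as np
from collections import Counter

def extract_candidates(seq, n_min=2, n_max=4, min_count=2):
    # Collect n-grams (substrings of discrete "letters") that recur
    # at least min_count times; these are candidate key motions.
    counts = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(seq) - n + 1):
            counts[seq[i:i + n]] += 1
    return [g for g, c in counts.items() if c >= min_count]

def svd_filter(candidates, sequences, rank=2, threshold=0.1):
    # Build a candidate-by-sequence occurrence-count matrix and keep
    # candidates with significant weight in the leading singular
    # subspace (an LSA-style heuristic, assumed here for illustration).
    M = np.array([[s.count(g) for s in sequences] for g in candidates],
                 dtype=float)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    scores = np.linalg.norm(U[:, :rank] * S[:rank], axis=1)
    return [g for g, sc in zip(candidates, scores) if sc >= threshold]
```

For example, on a letter sequence such as `"ababcababc"`, `extract_candidates` would surface recurring substrings like `"ab"` while discarding one-off n-grams, and `svd_filter` would then suppress candidates that carry negligible weight across the observed sequences.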
CITATION STYLE
Taniguchi, T., & Iwahashi, N. (2009). Imitation learning from unsegmented human motion using switching autoregressive model and singular vector decomposition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5506 LNCS, pp. 953–961). https://doi.org/10.1007/978-3-642-02490-0_116