This paper proposes a novel on-line gesture segmentation and recognition method for one-shot learning on depth video. In each depth image, we sample several random points from the motion region and select a group of relevant points for each random point. The depth difference between each random point and its relevant points is calculated on Motion History Images, and the results form the random point’s feature. A Random Decision Forest then assigns a gesture label to each random point, and the per-point predictions are aggregated into a probability distribution vector (PDV) for each frame of the video. Finally, we stack the PDVs of sequential frames into a probability distribution matrix (PDM) and perform on-line segmentation and recognition for one-shot learning. Experimental results show our method is competitive with state-of-the-art methods.
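The pipeline described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the depth-difference feature is simplified (no MHI), and `dummy_classify` is a stand-in for the trained Random Decision Forest; names such as `point_feature` and `frame_pdv` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES = 8      # number of gesture labels (illustrative)
N_POINTS = 50      # random points sampled per frame
N_RELEVANT = 16    # relevant points per random point

def point_feature(depth, p, offsets):
    # Depth difference between a random point p and its relevant points
    # (p + offsets), clipped to the image bounds -- a simplified stand-in
    # for the paper's MHI-based depth-difference feature.
    h, w = depth.shape
    qs = np.clip(p + offsets, [0, 0], [h - 1, w - 1])
    return depth[p[0], p[1]] - depth[qs[:, 0], qs[:, 1]]

def dummy_classify(feat):
    # Placeholder for the Random Decision Forest: any predictor that maps a
    # per-point feature to a class-probability vector fits here.
    logits = rng.normal(size=N_CLASSES) + feat.mean()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def frame_pdv(depth, classify):
    # Average the per-point class distributions into the frame's PDV.
    h, w = depth.shape
    pts = rng.integers([0, 0], [h, w], size=(N_POINTS, 2))
    offsets = rng.integers(-5, 6, size=(N_RELEVANT, 2))
    probs = np.stack([classify(point_feature(depth, p, offsets)) for p in pts])
    return probs.mean(axis=0)  # shape: (N_CLASSES,)

# Stack the PDVs of sequential frames into the PDM used for on-line
# segmentation and recognition (synthetic depth frames for illustration).
frames = [rng.random((120, 160)) for _ in range(10)]
pdm = np.stack([frame_pdv(f, dummy_classify) for f in frames])
print(pdm.shape)  # (10, N_CLASSES); each row sums to 1
```

Each row of the PDM is one frame's gesture-label distribution, so segmentation boundaries and gesture identities can both be read off the matrix over time.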
CITATION STYLE
Rong, T., & Yang, R. (2016). One-shot-learning gesture segmentation and recognition using frame-based PDV features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9916 LNCS, pp. 355–365). Springer Verlag. https://doi.org/10.1007/978-3-319-48890-5_35