One-shot-learning gesture segmentation and recognition using frame-based PDV features


Abstract

This paper proposes a novel online gesture segmentation and recognition method for one-shot learning on depth video. In each depth image, several random points are sampled from the motion region, and a group of relevant points is selected for each random point. The depth difference between each random point and its relevant points is computed on Motion History Images, and the results form that random point's feature. A Random Decision Forest then assigns a gesture label to each random point, from which a probability distribution vector (PDV) is derived for each frame of the video. Finally, the PDVs of sequential frames are stacked into a probability distribution matrix (PDM), which is used for online segmentation and recognition in the one-shot-learning setting. Experimental results show that the method is competitive with state-of-the-art methods.
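The per-frame PDV computation described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: scikit-learn's `RandomForestClassifier` stands in for the Random Decision Forest, a fixed set of pixel offsets stands in for the paper's "relevant points" selection (which the abstract does not specify), the Motion History Image step is omitted, and the synthetic gradient frames exist only to make the example runnable.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical offset set standing in for each random point's "relevant
# points"; the paper's actual selection scheme is not given in the abstract.
OFFSETS = [(-5, 0), (5, 0), (0, -5), (0, 5)]

def point_features(depth, points):
    """Depth-difference feature for each sampled point: the difference between
    its depth and the depth at each of its relevant (offset) points."""
    h, w = depth.shape
    feats = np.empty((len(points), len(OFFSETS)), dtype=np.float32)
    for i, (y, x) in enumerate(points):
        for j, (dy, dx) in enumerate(OFFSETS):
            ry = min(max(y + dy, 0), h - 1)
            rx = min(max(x + dx, 0), w - 1)
            feats[i, j] = float(depth[y, x]) - float(depth[ry, rx])
    return feats

def sample_points(depth, n):
    """Sample n random pixel coordinates (here from the whole frame;
    the paper restricts sampling to the motion region)."""
    h, w = depth.shape
    return np.column_stack([rng.integers(0, h, n), rng.integers(0, w, n)])

def synthetic_frame(gesture):
    """Toy depth frames: gesture 0 has a horizontal depth gradient, gesture 1
    a vertical one, so their depth-difference features are separable."""
    base = np.tile(np.arange(64, dtype=np.float32), (64, 1))
    return base if gesture == 0 else base.T

# Train the forest on per-point features from a few labelled frames
# (stand-in for the one-shot training examples).
X, y = [], []
for gesture in (0, 1):
    for _ in range(5):
        depth = synthetic_frame(gesture)
        pts = sample_points(depth, 50)
        X.append(point_features(depth, pts))
        y.extend([gesture] * 50)
forest = RandomForestClassifier(n_estimators=20, random_state=0)
forest.fit(np.vstack(X), np.array(y))

def frame_pdv(depth, n_points=50):
    """Per-frame PDV: average the forest's per-class probabilities over the
    random points sampled from the frame."""
    feats = point_features(depth, sample_points(depth, n_points))
    return forest.predict_proba(feats).mean(axis=0)

# PDM: PDVs of sequential frames stacked row-wise (frames x gesture classes),
# the input to the online segmentation/recognition stage.
video = [synthetic_frame(0) for _ in range(3)]
pdm = np.stack([frame_pdv(f) for f in video])
print(pdm.shape)  # (3, 2); each row sums to 1
```

Averaging per-point class probabilities into a single PDV makes the frame descriptor's size independent of how many points are sampled, which is what lets sequential PDVs be stacked into a fixed-width PDM for online processing.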

Citation (APA)

Rong, T., & Yang, R. (2016). One-shot-learning gesture segmentation and recognition using frame-based PDV features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9916 LNCS, pp. 355–365). Springer Verlag. https://doi.org/10.1007/978-3-319-48890-5_35
