Recognition of human continuous action with 3D CNN

Abstract

With the rapid growth of service robots, continuous human action recognition has become an indispensable research topic. In this paper, we propose a continuous action recognition method based on a multi-channel 3D CNN that extracts multiple features, which are then classified with KNN. First, we use fragmentary actions as training samples, so that an action can be identified while it is still in progress. The training samples are then processed with grayscale conversion, an improved Lucas-Kanade (L-K) optical flow, and a Gabor filter, to extract diverse features using prior knowledge. Next, the 3D CNN is constructed to process the multi-channel features, which are formed into 128-dimensional feature maps. Finally, KNN is used to classify the samples. We find that recognizing fragmentary actions within continuous actions shows good robustness, and experiments on HMDB-51 and UCF-101 verify that the proposed method is more accurate for action recognition than a Gaussian Bayes classifier or a single 3D CNN.
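
For readers who want a concrete picture of the pipeline, the following is a minimal sketch in Python, assuming OpenCV, PyTorch, and scikit-learn. The layer sizes, Gabor parameters, and the use of standard dense optical flow are placeholders; the paper's improved L-K optical flow and exact 3D CNN configuration are not specified in the abstract.

# Sketch of the abstract's pipeline: per-clip grayscale / optical-flow / Gabor
# channels -> 3D CNN -> 128-D features -> KNN. All parameters are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier


def preprocess_clip(frames):
    """Build 3 channels per clip: grayscale, optical-flow magnitude, Gabor response."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    # Dense Farneback flow stands in for the paper's improved L-K optical flow.
    flow_mag = [np.zeros_like(gray[0], dtype=np.float32)]
    for prev, cur in zip(gray[:-1], gray[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flow_mag.append(cv2.magnitude(flow[..., 0], flow[..., 1]))
    gabor_k = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5)
    gabor = [cv2.filter2D(g, cv2.CV_32F, gabor_k) for g in gray]
    # Result shape: (channels=3, depth=T, height, width)
    clip = np.stack([np.stack(gray).astype(np.float32),
                     np.stack(flow_mag),
                     np.stack(gabor)], axis=0)
    return torch.from_numpy(clip)


class MultiChannel3DCNN(nn.Module):
    """Small 3D CNN mapping a 3-channel clip to a 128-dimensional feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):          # x: (batch, 3, T, H, W)
        h = self.features(x).flatten(1)
        return self.fc(h)          # (batch, 128)


def classify(model, train_clips, train_labels, test_clips, k=5):
    """Extract 128-D features for clips, then classify them with KNN."""
    model.eval()
    with torch.no_grad():
        train_feats = model(torch.stack(train_clips)).numpy()
        test_feats = model(torch.stack(test_clips)).numpy()
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_feats, train_labels)
    return knn.predict(test_feats)

Keeping the 128-dimensional feature extractor separate from the classifier mirrors the abstract's split between the multi-channel 3D CNN and the final KNN step.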

Cite

APA

Yu, G., & Li, T. (2017). Recognition of human continuous action with 3D CNN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10528 LNCS, pp. 314–322). Springer Verlag. https://doi.org/10.1007/978-3-319-68345-4_28
