View Enhanced Jigsaw Puzzle for Self-Supervised Feature Learning in 3D Human Action Recognition

Abstract

Self-supervised learning methods have received much attention in skeleton-based human action recognition. These methods rely on pretext tasks to exploit unlabeled data and learn an effective feature encoder. In this paper, a novel self-supervised learning method is proposed. First, we design a new pretext task called the view enhanced jigsaw puzzle (VEJP) to increase the learning difficulty for the encoder. The VEJP introduces multi-view information into the jigsaw puzzle, forcing the encoder to learn view-independent high-level features of human skeletons. Building on the encoder trained with VEJP, we propose the view pooling encoder (VPE), which integrates information from multiple views through a pooling mechanism; the features extracted by the VPE are more robust and discriminative. In addition, by adjusting the difficulty of the VEJP, we study the influence of pretext task difficulty on downstream task performance, and the experimental results show that pretext tasks should be moderately difficult to achieve effective feature learning. Our method achieves competitive results on representative benchmark datasets. It provides a strong baseline for the jigsaw puzzle task and shows advantages when labeled data are scarce.
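To make the pretext task concrete, below is a minimal sketch of how a view enhanced jigsaw puzzle sample might be constructed: a skeleton sequence is split into temporal segments, each segment is rendered from a randomly chosen virtual camera view, the segments are shuffled, and the encoder is trained to predict the permutation. All names (`make_vejp_sample`, `view_pool`), the choice of rotation axis, and the segment/view counts are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_view(skeleton, angle):
    # Rotate 3D joint coordinates about the vertical (y) axis to
    # simulate observing the skeleton from a different camera view.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return skeleton @ R.T

def make_vejp_sample(sequence, n_segments=3, n_views=3):
    # Split the sequence into temporal segments, render each segment
    # from a randomly chosen view (the "view enhancement"), shuffle the
    # segments, and return them with the permutation label to predict.
    T = sequence.shape[0]
    bounds = np.linspace(0, T, n_segments + 1, dtype=int)
    segments = [sequence[bounds[i]:bounds[i + 1]] for i in range(n_segments)]
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_views)
    segments = [rotate_view(seg, angles[rng.integers(n_views)]) for seg in segments]
    perm = rng.permutation(n_segments)
    shuffled = [segments[i] for i in perm]
    return shuffled, perm  # the encoder is trained to predict `perm`

def view_pool(features_per_view):
    # One plausible pooling mechanism for a view pooling encoder:
    # element-wise max over feature vectors extracted from each view.
    return np.max(np.stack(features_per_view), axis=0)

# Toy skeleton sequence: 30 frames, 25 joints, 3D coordinates.
seq = rng.normal(size=(30, 25, 3))
pieces, label = make_vejp_sample(seq)
pooled = view_pool([rng.normal(size=128) for _ in range(3)])
print(len(pieces), label.shape, pooled.shape)  # 3 (3,) (128,)
```

Predicting the permutation is a classification over the `n_segments!` possible orderings, so raising `n_segments` (or the number of candidate views) is one natural knob for tuning pretext task difficulty, as the abstract discusses.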

Citation (APA)

You, W., & Wang, X. (2022). View Enhanced Jigsaw Puzzle for Self-Supervised Feature Learning in 3D Human Action Recognition. IEEE Access, 10, 36385–36396. https://doi.org/10.1109/ACCESS.2022.3165040
