In this paper, we propose a viewpoint-aware action recognition method using skeleton-based features from static images. Our method consists of three main steps. First, we categorize the viewpoint of an input static image. Second, we extract 2D/3D joints using state-of-the-art convolutional neural networks and analyze the geometric relationships among the joints to compute 2D and 3D skeleton features. Finally, we perform view-specific action classification for each person, based on the viewpoint category and the extracted 2D and 3D skeleton features. To train and validate our method, we implement two multi-view data acquisition systems and create a new action recognition dataset annotated with viewpoint labels. The robustness of the proposed method to viewpoint changes was quantitatively confirmed on two multi-view datasets, and a real-world application recognizing various actions was qualitatively demonstrated.
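The three-step pipeline above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the function names, the viewpoint categories, and the toy geometric rules are all assumptions, whereas the actual method uses CNN-based 2D/3D pose estimators and learned view-specific classifiers.

```python
# Hypothetical sketch of the pipeline: (1) viewpoint categorization,
# (2) skeleton-feature extraction from joint geometry, (3) view-specific
# action classification. Joint names and thresholds are illustrative only.
import math

def categorize_viewpoint(joints_2d):
    """Step 1 (toy stand-in): categorize the camera viewpoint using
    shoulder width relative to torso height as a crude geometric cue."""
    l_sh, r_sh, hip = joints_2d["l_shoulder"], joints_2d["r_shoulder"], joints_2d["hip"]
    shoulder_w = abs(l_sh[0] - r_sh[0])
    torso_h = abs((l_sh[1] + r_sh[1]) / 2.0 - hip[1])
    if torso_h < 1e-6:            # torso collapsed in the image plane
        return "top"
    return "front" if shoulder_w / torso_h > 0.5 else "side"

def skeleton_features(joints_2d):
    """Step 2 (toy stand-in): geometric relations among joints,
    e.g. torso length and a width/length aspect ratio."""
    l_sh, r_sh, hip = joints_2d["l_shoulder"], joints_2d["r_shoulder"], joints_2d["hip"]
    neck = ((l_sh[0] + r_sh[0]) / 2.0, (l_sh[1] + r_sh[1]) / 2.0)
    torso_len = math.dist(neck, hip)
    return {"torso_len": torso_len, "aspect": math.dist(l_sh, r_sh) / torso_len}

def classify_action(joints_2d, view_classifiers):
    """Step 3: dispatch the features to a classifier chosen by viewpoint."""
    view = categorize_viewpoint(joints_2d)
    feats = skeleton_features(joints_2d)
    return view, view_classifiers[view](feats)

# Hypothetical per-view classifiers (learned in the paper; toy rules here).
classifiers = {
    "front": lambda f: "standing" if f["aspect"] < 1.0 else "lying",
    "side":  lambda f: "standing" if f["aspect"] < 0.6 else "lying",
    "top":   lambda f: "unknown",
}

joints = {"l_shoulder": (0.0, 0.0), "r_shoulder": (1.0, 0.0), "hip": (0.5, 1.2)}
view, action = classify_action(joints, classifiers)
```

The key design point this sketch mirrors is the dispatch in step 3: rather than one classifier for all camera angles, a separate classifier is selected per viewpoint category, which is what makes the recognition viewpoint-aware.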
CITATION STYLE
Kim, S. H., & Cho, D. (2021). Viewpoint-aware action recognition using skeleton-based features from still images. Electronics (Switzerland), 10(9). https://doi.org/10.3390/electronics10091118