Viewpoint-aware action recognition using skeleton-based features from still images

Abstract

In this paper, we propose a viewpoint-aware action recognition method using skeleton-based features from static images. Our method consists of three main steps. First, we categorize the viewpoint from an input static image. Second, we extract 2D/3D joints using state-of-the-art convolutional neural networks and analyze the geometric relationships of the joints to compute 2D and 3D skeleton features. Finally, we perform view-specific action classification per person, based on the viewpoint category and the extracted 2D and 3D skeleton features. We implement two multi-view data acquisition systems and create a new action recognition dataset containing viewpoint labels, in order to train and validate our method. The robustness of the proposed method to viewpoint changes was quantitatively confirmed using two multi-view datasets. A real-world application for recognizing various actions was also qualitatively demonstrated.
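The second step — turning extracted joints into geometric skeleton features, then routing them to a view-specific classifier — could be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: the feature definition here (pairwise joint distances normalized by a reference bone length) and the viewpoint-keyed classifier dictionary are hypothetical stand-ins for the method described in the abstract.

```python
import numpy as np

def skeleton_features_2d(joints):
    """Hypothetical 2D skeleton features: all pairwise joint distances,
    normalized by the length of a reference bone (here, joint 0 to joint 1,
    e.g. neck to pelvis) for scale invariance."""
    joints = np.asarray(joints, dtype=float)          # shape (J, 2)
    scale = np.linalg.norm(joints[0] - joints[1]) + 1e-8
    diffs = joints[:, None, :] - joints[None, :, :]   # (J, J, 2)
    dists = np.linalg.norm(diffs, axis=-1) / scale    # (J, J)
    iu = np.triu_indices(len(joints), k=1)
    return dists[iu]                                  # J*(J-1)/2 features

def classify_view_specific(features, viewpoint, classifiers):
    """Route the feature vector to the classifier trained for the
    detected viewpoint category (e.g. 'front', 'side', 'top')."""
    return classifiers[viewpoint](features)
```

A toy three-joint skeleton yields a three-dimensional feature vector; in practice the per-viewpoint classifiers would be trained models rather than simple callables.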

Citation (APA)

Kim, S. H., & Cho, D. (2021). Viewpoint-aware action recognition using skeleton-based features from still images. Electronics (Switzerland), 10(9). https://doi.org/10.3390/electronics10091118
