Abstract
Many researchers in the computer vision community have been tackling challenging problems in human action recognition. Most algorithms falter because they rely on a limited set of features. We propose a framework for skeleton-based action recognition (AR) from sequences of 3D joint locations. Within this framework, we fuse three different features, namely distance, angle and velocity, to improve recognition accuracy. Kernel-based methods have proven remarkably effective at recognizing RGB and 3D actions. This work explores the potential of global alignment kernels for skeleton-based human action recognition from Microsoft Kinect sensor skeleton data. Accordingly, the distance, angle and velocity features are encoded into global alignment kernels, and recognition is carried out based on the similarity between the query and database features. The framework has been tested on our own 53-class, 5-subject action dataset, named KLU3D Action, captured with a Microsoft Kinect v2 sensor, and on three publicly available action datasets: NTU RGB+D, G3D and UTD-MHAD. Our algorithm outperforms previous algorithms on these datasets.
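To make the pipeline concrete, below is a minimal sketch of the two stages the abstract describes: per-frame distance, angle and velocity features computed from 3D joint positions, and a global alignment kernel (GAK, in the style of Cuturi et al.) comparing two feature sequences. This is not the authors' implementation; the function names, the choice of joint pairs and triples for the distance and angle features, and the plain Gaussian local kernel are all illustrative assumptions.

```python
import numpy as np

def frame_features(joints, prev_joints=None):
    """joints: (J, 3) array of 3D joint positions for one frame.
    Fuses distance, angle and velocity features into one vector.
    Joint pairs/triples used here are assumptions, not the paper's exact choice."""
    J = joints.shape[0]
    # Distance: all pairwise joint distances (upper triangle).
    diffs = joints[:, None, :] - joints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)[np.triu_indices(J, k=1)]
    # Angle: at each interior joint, the angle between vectors to its
    # neighbouring joints in index order (an illustrative convention).
    angles = []
    for j in range(1, J - 1):
        a = joints[j - 1] - joints[j]
        b = joints[j + 1] - joints[j]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    # Velocity: per-joint displacement from the previous frame.
    vel = (joints - prev_joints).ravel() if prev_joints is not None \
          else np.zeros(3 * J)
    return np.concatenate([dists, np.asarray(angles), vel])

def sequence_features(seq):
    """seq: (T, J, 3) skeleton sequence -> (T, D) fused feature matrix."""
    return np.stack([frame_features(seq[t], seq[t - 1] if t else None)
                     for t in range(len(seq))])

def global_alignment_kernel(X, Y, sigma=1.0):
    """GAK between feature sequences X (Tx, D) and Y (Ty, D) via the
    standard dynamic-programming recursion:
        K[i, j] = k(x_i, y_j) * (K[i-1, j] + K[i, j-1] + K[i-1, j-1])."""
    Tx, Ty = len(X), len(Y)
    K = np.zeros((Tx + 1, Ty + 1))
    K[0, 0] = 1.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            d2 = np.sum((X[i - 1] - Y[j - 1]) ** 2)
            k_local = np.exp(-d2 / (2.0 * sigma ** 2))
            K[i, j] = k_local * (K[i - 1, j] + K[i, j - 1] + K[i - 1, j - 1])
    return K[Tx, Ty]
```

Under this reading, recognition amounts to labelling a query sequence with the class of the database sequence that yields the highest kernel value, matching the abstract's "similarity between the query and database features". In practice the GAK recursion is usually run in log space to avoid underflow on long sequences; the plain form above is kept for clarity.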
Citation
Sastry, A. S. C. S., Geetesh, S., Sandeep, A., Vitru Varenya, V. S. V. A., Kishore, P. V. V., Anil Kumar, D., … Teja Kiran Kumar, M. (2019). Fusing spatio-temporal joint features for adequate skeleton based action recognition using global alignment kernel. International Journal of Engineering and Advanced Technology, 8(4), 749–754.