Human Action Recognition Using Fusion of Depth and Inertial Sensors

Abstract

In this paper we present a human action recognition system that fuses depth and inertial sensor measurements. Robust, subject-invariant features are extracted from the depth and inertial signals and used to train independent neural networks, and decision-level fusion is then performed in a probabilistic framework using a Logarithmic Opinion Pool. The system is evaluated on the UTD Multimodal Human Action Dataset (UTD-MHAD), where it achieves 95% accuracy under 8-fold cross-validation. This is not only higher than the accuracy obtained with either sensor alone, but also exceeds the best previously reported accuracy on this dataset by 3.5%.
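For readers unfamiliar with the fusion rule mentioned above, the sketch below illustrates how a Logarithmic Opinion Pool combines the class posteriors of independently trained classifiers (here, one per sensor): the fused posterior is proportional to the weighted geometric mean of the individual posteriors. This is a minimal illustrative sketch, not the authors' implementation; the function name, the equal-weight default, and the example values are assumptions made here for clarity.

```python
import numpy as np

def logarithmic_opinion_pool(posteriors, weights=None, eps=1e-12):
    """Fuse per-sensor class posteriors with a Logarithmic Opinion Pool.

    posteriors: array of shape (n_sensors, n_classes); each row is one
                classifier's posterior distribution P_k(c | x).
    weights:    optional per-sensor weights alpha_k (assumed here to sum
                to 1); defaults to equal weighting across sensors.
    Returns the fused distribution proportional to prod_k P_k(c | x)^alpha_k.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    n_sensors, _ = posteriors.shape
    if weights is None:
        weights = np.full(n_sensors, 1.0 / n_sensors)
    weights = np.asarray(weights, dtype=float)
    # Work in log space for numerical stability, then renormalize.
    log_fused = np.sum(weights[:, None] * np.log(posteriors + eps), axis=0)
    fused = np.exp(log_fused - log_fused.max())
    return fused / fused.sum()

# Hypothetical example: fuse depth-NN and inertial-NN posteriors
# over three action classes and pick the most likely class.
depth_posterior = [0.70, 0.20, 0.10]
inertial_posterior = [0.60, 0.30, 0.10]
fused = logarithmic_opinion_pool([depth_posterior, inertial_posterior])
predicted_class = int(np.argmax(fused))
```

Because the pool multiplies posteriors, a class must receive reasonable support from both sensors to score highly, which is one common motivation for choosing a logarithmic rather than a linear opinion pool in decision-level fusion.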

Citation (APA)

Fuad, Z., & Unel, M. (2018). Human Action Recognition Using Fusion of Depth and Inertial Sensors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10882 LNCS, pp. 373–380). Springer Verlag. https://doi.org/10.1007/978-3-319-93000-8_42
