Human Activity Recognition from Multiple Sensors Data Using Multi-fusion Representations and CNNs

Abstract

With the growing interest in ubiquitous sensing, it has become possible to build assistive technologies that support people in their daily activities with personalized feedback and services. For instance, an individual's behavioral patterns (e.g., physical activity, location, and mood) can be detected using sensors embedded in smartwatches and smartphones. Multi-sensor environments, however, also bring challenges, such as how to fuse and combine data from different sources. In this article, we explore several fusion methods for multiple representations of sensor data: multiple representations are generated from the sensor data and then fused at the data level, the feature level, and the decision level. The presented approaches are based on deep convolutional neural networks (CNNs), and a generic architecture for fusing data from different sensors is proposed. The methods are evaluated on three publicly available human activity recognition (HAR) datasets. The proposed method shows promising performance, with the best results reaching an overall accuracy of 98.4% on the Context-Awareness via Wrist-Worn Motion Sensors (HANDY) dataset and 98.7% on the Wireless Sensor Data Mining (WISDM version 1.1) dataset, both of which outperform previous approaches.
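
The abstract describes fusing multiple sensor representations with CNNs at the data, feature, and decision levels. The sketch below is not the authors' published code; it only illustrates, under assumed shapes, layer sizes, class count, and sensor pairing (accelerometer and gyroscope), what the three fusion levels could look like with small 1D-CNN branches in PyTorch.

import torch
import torch.nn as nn

N_CLASSES = 6          # assumed number of activity classes
WINDOW = 128           # assumed samples per sliding window
CHANNELS = 3           # e.g. x/y/z axes of one sensor

def conv_branch(in_channels):
    """Small 1D-CNN feature extractor for one sensor representation (illustrative)."""
    return nn.Sequential(
        nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Conv1d(32, 64, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),   # -> (batch, 64, 1)
        nn.Flatten(),              # -> (batch, 64)
    )

class FeatureLevelFusion(nn.Module):
    """Feature-level fusion: one CNN branch per sensor, features concatenated before the classifier."""
    def __init__(self):
        super().__init__()
        self.acc_branch = conv_branch(CHANNELS)
        self.gyro_branch = conv_branch(CHANNELS)
        self.classifier = nn.Linear(64 + 64, N_CLASSES)

    def forward(self, acc, gyro):
        feats = torch.cat([self.acc_branch(acc), self.gyro_branch(gyro)], dim=1)
        return self.classifier(feats)

# Data-level fusion: stack the raw sensor channels and feed a single branch.
data_level = nn.Sequential(conv_branch(2 * CHANNELS), nn.Linear(64, N_CLASSES))

# Decision-level fusion: average the class probabilities of per-sensor models.
acc_model = nn.Sequential(conv_branch(CHANNELS), nn.Linear(64, N_CLASSES))
gyro_model = nn.Sequential(conv_branch(CHANNELS), nn.Linear(64, N_CLASSES))

acc = torch.randn(8, CHANNELS, WINDOW)    # dummy accelerometer windows
gyro = torch.randn(8, CHANNELS, WINDOW)   # dummy gyroscope windows

feature_logits = FeatureLevelFusion()(acc, gyro)
data_logits = data_level(torch.cat([acc, gyro], dim=1))
decision_probs = (acc_model(acc).softmax(dim=1) + gyro_model(gyro).softmax(dim=1)) / 2
print(feature_logits.shape, data_logits.shape, decision_probs.shape)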

Citation (APA)

Noori, F. M., Riegler, M., Uddin, M. Z., & Torresen, J. (2020). Human Activity Recognition from Multiple Sensors Data Using Multi-fusion Representations and CNNs. ACM Transactions on Multimedia Computing, Communications and Applications, 16(2). https://doi.org/10.1145/3377882
