Imaging and fusing time series for wearable sensor-based human activity recognition

262 citations · 177 Mendeley readers

Abstract

To facilitate data-driven and informed decision making, a novel deep neural network architecture for human activity recognition based on multiple sensor data is proposed in this work. Specifically, the proposed architecture encodes each time series of sensor data as an image (i.e., encoding one time series into a two-channel image), and leverages these transformed images to retain the features necessary for human activity recognition. In other words, by imaging time series, wearable sensor-based human activity recognition can be realized with computer vision techniques for image recognition. In particular, to enable heterogeneous sensor data to be trained cooperatively, a fusion residual network is adopted, fusing two networks and training heterogeneous data with pixel-wise correspondence. Moreover, deep residual networks of different depths are used to accommodate differences in dataset size. The proposed architecture is extensively evaluated on two human activity recognition datasets (i.e., the HHAR and MHEALTH datasets), which comprise various heterogeneous mobile device sensor modalities (i.e., acceleration, angular velocity, and magnetic field orientation). The findings demonstrate that the proposed approach outperforms competing approaches in terms of accuracy and F1-score.
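The abstract does not specify which imaging transform is used; a common way to encode one time series into a two-channel image, consistent with the description above, is the Gramian Angular Field (GASF/GADF) pair. The sketch below (function name and use of NumPy are illustrative assumptions, not the authors' implementation) shows how a 1-D sensor signal could become a two-channel image suitable for a CNN:

```python
import numpy as np

def encode_two_channel_image(series: np.ndarray) -> np.ndarray:
    """Illustrative sketch: encode a 1-D time series as a two-channel
    image via Gramian Angular Summation/Difference Fields (GASF/GADF).
    One plausible realization of 'one time series -> two-channel image'."""
    # Rescale the series into [-1, 1] so each value can act as cos(phi)
    lo, hi = series.min(), series.max()
    x = np.clip((2.0 * series - hi - lo) / (hi - lo), -1.0, 1.0)
    phi = np.arccos(x)  # polar-coordinate angle per sample
    # Channel 1 (GASF): cos(phi_i + phi_j); channel 2 (GADF): sin(phi_i - phi_j)
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return np.stack([gasf, gadf], axis=0)  # shape (2, n, n)

# Example: a short accelerometer-like signal of 64 samples
img = encode_two_channel_image(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(img.shape)  # (2, 64, 64)
```

Each sensor channel (e.g., each accelerometer axis) would be encoded this way, and the resulting images fed to the image-recognition branch of the fusion network.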

Citation (APA)

Qin, Z., Zhang, Y., Meng, S., Qin, Z., & Choo, K.-K. R. (2020). Imaging and fusing time series for wearable sensor-based human activity recognition. Information Fusion, 53, 80–87. https://doi.org/10.1016/j.inffus.2019.06.014
