AttnSense: Multi-level attention mechanism for multimodal human activity recognition

205 citations · 142 Mendeley readers

Abstract

Sensor-based human activity recognition is a fundamental research problem in ubiquitous computing: it infers human activities from the rich sensing data of multimodal embedded sensors such as accelerometers and gyroscopes. Existing activity recognition approaches either rely on domain knowledge or fail to capture the spatial-temporal dependencies of the sensing signals. In this paper, we propose AttnSense, a novel attention-based multimodal neural network model for human activity recognition. AttnSense combines an attention mechanism with a convolutional neural network (CNN) and a Gated Recurrent Unit (GRU) network to capture the dependencies of sensing signals in both the spatial and temporal domains, which aids prioritized sensor selection and improves interpretability. Extensive experiments on three public datasets show that AttnSense achieves competitive performance in activity recognition compared with several state-of-the-art methods.
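The multi-level attention the abstract describes can be illustrated with a toy NumPy sketch (not the authors' implementation): sensor-level attention fuses the modalities at each time step, and temporal attention then fuses the time steps into one activity representation. The scoring vectors `w_mod` and `w_time` are hypothetical placeholders for learned parameters, and the GRU that AttnSense applies between the two levels is omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features, w):
    # features: (n, d) items to fuse; w: (d,) scoring vector (placeholder
    # for a learned attention parameter)
    scores = features @ w            # (n,) unnormalized attention scores
    alpha = softmax(scores)          # (n,) weights, non-negative, sum to 1
    return alpha @ features, alpha   # weighted sum (d,), and the weights

rng = np.random.default_rng(0)
T, M, d = 5, 3, 8                        # time steps, modalities, feature dim
X = rng.standard_normal((T, M, d))       # per-timestep, per-modality features

w_mod = rng.standard_normal(d)           # hypothetical sensor-level scorer
w_time = rng.standard_normal(d)          # hypothetical temporal scorer

# Level 1: sensor-level attention fuses the M modalities at each time step
fused = np.stack([attention_fuse(X[t], w_mod)[0] for t in range(T)])  # (T, d)

# (AttnSense runs a GRU over `fused` here; omitted in this sketch.)

# Level 2: temporal attention fuses the T steps into one representation
z, alpha_t = attention_fuse(fused, w_time)  # z: (d,), alpha_t: (T,)
```

The attention weights `alpha_t` are what make the model's choices inspectable: large entries indicate which time steps (or, at level 1, which sensors) the prediction leaned on.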

Citation (APA)

Ma, H., Li, W., Zhang, X., Gao, S., & Lu, S. (2019). AttnSense: Multi-level attention mechanism for multimodal human activity recognition. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 3109–3115). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/431
