A deep learning and multimodal ambient sensing framework for human activity recognition


Abstract

Human Activity Recognition (HAR) is an important area of research in ambient intelligence for contexts such as ambient-assisted living. Existing HAR approaches are mostly based on vision, mobile, or wearable sensors. In this paper, we propose a hybrid approach to HAR that combines three sensing technologies: a smartphone accelerometer, RGB cameras, and ambient sensors. Acceleration and video streams are analyzed using a multiclass Support Vector Machine (SVM) and Convolutional Neural Networks (CNNs), respectively. This analysis is enriched with ambient sensing data to assign semantics to human activities using description logic rules. For integration, we design and implement a framework that covers the full HAR pipeline, from data collection through activity recognition and visualization. Use cases and performance evaluations of the proposed approach clearly show its utility and efficiency in several everyday scenarios.
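To make the accelerometer branch concrete, the following is a minimal, illustrative sketch of a multiclass SVM trained on windowed acceleration features. It is not the authors' implementation: the window length, feature set, activity labels, and synthetic data are assumptions chosen only to keep the example self-contained and runnable.

```python
# Illustrative sketch (not the paper's code): multiclass SVM over windowed
# tri-axial accelerometer features, using synthetic data as a stand-in for
# real smartphone streams.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

ACTIVITIES = ["walking", "sitting", "standing", "falling"]  # hypothetical label set
WINDOW = 128  # samples per window (assumed, not from the paper)

def extract_features(window):
    """Simple time-domain features per axis: mean, std, min, max, energy."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(), np.mean(x ** 2)]
    return np.array(feats)

# Build a synthetic dataset: each activity gets windows drawn from a
# different distribution so the classes are separable.
X, y = [], []
for label, _activity in enumerate(ACTIVITIES):
    for _ in range(100):
        window = rng.normal(loc=label, scale=1.0, size=(WINDOW, 3))
        X.append(extract_features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Multiclass SVM (scikit-learn handles multiclass via one-vs-one by default).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In the proposed framework, the predictions of such a classifier would then be combined with the CNN-based video analysis and ambient sensor readings, with description logic rules assigning semantics to the recognized activities.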

Citation (APA)

Yachir, A., Amamra, A., Djamaa, B., Zerrouki, A., & Amour, A. K. (2019). A deep learning and multimodal ambient sensing framework for human activity recognition. In Proceedings of the 2019 Federated Conference on Computer Science and Information Systems, FedCSIS 2019 (pp. 101–105). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.15439/2019F50
