A comparative analysis on sensor-based human activity recognition using various deep learning techniques

Abstract

Human activity recognition (HAR) is the problem of classifying body gestures and movements in order to infer what physical activity a person is performing. Inertial measurement units (IMUs) are the prevailing technique for measuring range of motion, speed, velocity, and magnetic field orientation during such physical exercises. Body-worn inertial sensors produce signals that track body motion and vital signs, from which models can be trained efficiently to identify physical activity correctly. This paper contrasts extreme gradient boosting (XGBoost), multi-layer perceptron (MLP), convolutional neural network (CNN), and long short-term memory (LSTM) network methods for distinguishing human behaviors on the MHEALTH dataset. The efficiency of these machine learning models is compared against studies that fit the multisensory fusion analysis paradigm. The experimental findings on the MHEALTH dataset, which comprises 12 physical activities captured by four separate inertial sensors, are strongly promising and reliably outperform current baseline models. The best efficiency metrics were obtained by MLP and XGBoost, with accuracy (92.85%, 90.97%), precision (94.66%, 92.09%), recall (91.59%, 89.99%), and F1-score (92.7%, 90.78%), respectively.
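The comparison the abstract describes — training several classifiers on windowed inertial-sensor features and reporting accuracy, precision, recall, and F1 — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix is synthetic stand-in data (the real work uses MHEALTH signals from four IMUs and 12 activity classes), and scikit-learn's `GradientBoostingClassifier` is used as a stand-in for XGBoost.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for windowed IMU features (the paper uses MHEALTH:
# 12 activities, 4 body-worn inertial sensors).
rng = np.random.default_rng(0)
n_windows, n_features, n_classes = 600, 24, 4
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, n_classes, size=n_windows)
X += y[:, None] * 0.8  # shift class means so the toy problem is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "GBT (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred, average="macro"),
        "recall": recall_score(y_te, pred, average="macro"),
        "f1": f1_score(y_te, pred, average="macro"),
    }

for name, metrics in results.items():
    print(name, {k: round(v, 3) for k, v in metrics.items()})
```

Macro averaging is used here so every activity class contributes equally to precision, recall, and F1, which matters for activity datasets where some classes (e.g. walking) dominate the recordings.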

Citation (APA)

Indumathi, V., & Prabakeran, S. (2021). A comparative analysis on sensor-based human activity recognition using various deep learning techniques. Lecture Notes on Data Engineering and Communications Technologies, 66, 919–938. https://doi.org/10.1007/978-981-16-0965-7_70
