Abstract
Remote monitoring of fall incidents and of the daily activities of disabled persons is a vital driver of current telemedicine. The Internet of Things (IoT) and Artificial Intelligence (AI) models, which incorporate deep learning (DL) and machine learning (ML) techniques, are increasingly applied in healthcare to automate the detection of abnormal and unhealthy conditions. Fall detection (FD) in elderly patients and human action recognition for surveillance are crucial for safety, but achieving high accuracy remains challenging due to complex human movements. This paper presents a novel Temporal Convolutional Network-Based Fall Activity Recognition System for Disabled Persons (TCN-FARSDP) technique designed for use in an IoT environment, with the aim of monitoring and detecting fall incidents among disabled persons. Initially, the TCN-FARSDP method performs image pre-processing with Gaussian filtering (GF) to eliminate noise and improve image clarity. Next, features are extracted by fusing three models: NASNetMobile, DenseNet121, and MobileNetV3Large. A temporal convolutional network (TCN) classifier is then employed to detect fall activities. Finally, fine-tuning with the Adamax optimizer enhances the convergence and stability of the model. The performance of the TCN-FARSDP approach is evaluated on an FD dataset. Experimental validation showed a superior accuracy of 99.48% over existing techniques.
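The paper itself gives no implementation details beyond the pipeline stages named above. As a hedged illustration only, the sketch below shows two of those building blocks in plain NumPy/SciPy: the Gaussian-filtering (GF) pre-processing step, and the dilated causal 1-D convolution that is the core operation of a TCN layer. The function names (`preprocess_frame`, `causal_dilated_conv1d`) and parameter choices are my own assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def preprocess_frame(frame, sigma=1.0):
    """GF pre-processing step: suppress noise in a camera frame.

    A larger sigma smooths more aggressively; sigma=1.0 is an
    illustrative default, not a value taken from the paper.
    """
    return gaussian_filter(frame.astype(float), sigma=sigma)


def causal_dilated_conv1d(x, w, dilation):
    """One dilated causal 1-D convolution, the core of a TCN layer.

    x : (T,) feature sequence over time
    w : (k,) convolution kernel
    The output at time t depends only on x[t], x[t-d], x[t-2d], ...
    (current and past steps), enforced by left zero-padding, so the
    classifier never looks into the future of the activity sequence.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

In a full TCN, several such convolutions are stacked with exponentially growing dilations (1, 2, 4, ...) so the receptive field covers long stretches of the activity sequence with few layers; for example, `causal_dilated_conv1d(x, np.array([0.0, 1.0]), 2)` simply delays the sequence by two time steps.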
Citation
Alzahrani, A., Al-Dayil, R., Alghanim, A. G., & Sharif, M. M. (2026). Artificial Intelligence-based fine-tuning model for fall activity recognition in disabled persons within an IoT environment. Scientific Reports, 16(1). https://doi.org/10.1038/s41598-025-30340-7