Robust self-adaptation fall-detection system based on camera height


Abstract

Vision-based fall-detection methods have been studied previously, but many have practical limitations. Because rooms differ, users do not mount the camera or sensor at the same height, yet few studies have taken this into account. Moreover, some fall-detection methods lack practicality because they consider only standing, sitting, and falling. Hence, this study constructs a data set consisting of various daily activities and fall events and studies the effect of camera/sensor height on fall-detection accuracy. Each activity in the data set is performed by eight participants in eight directions and recorded with a depth camera at five different heights. Many related studies depend heavily on the human segmentation provided by the Kinect SDK, which is not sufficiently reliable. To address this issue, this study proposes Enhanced Tracking and Denoising Alex-Net (ETDA-Net) to improve tracking and denoising performance and to classify fall and non-fall events. Experimental results indicate that fall-detection accuracy is affected by camera height, that ETDA-Net is robust to this variation, and that it outperforms traditional deep-learning-based fall-detection methods.
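The abstract does not give architectural details of ETDA-Net, so the listing below is only a rough, hypothetical sketch (in PyTorch, an assumed framework) of the kind of fall/non-fall classification stage it describes: an AlexNet-style network adapted to single-channel depth frames with a two-class output. The tracking and denoising components of ETDA-Net, and the class name DepthFallClassifier, are not from the paper.

import torch
import torch.nn as nn
from torchvision.models import alexnet

class DepthFallClassifier(nn.Module):
    """Hypothetical AlexNet-based fall/non-fall classifier for depth frames
    (illustrative only; not the authors' ETDA-Net implementation)."""

    def __init__(self):
        super().__init__()
        backbone = alexnet()  # randomly initialized AlexNet backbone
        # Depth frames have one channel, so replace the 3-channel RGB stem.
        backbone.features[0] = nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2)
        # Two output classes: fall vs. non-fall.
        backbone.classifier[6] = nn.Linear(4096, 2)
        self.backbone = backbone

    def forward(self, depth_frame):           # depth_frame: (N, 1, 224, 224)
        return self.backbone(depth_frame)     # logits: (N, 2)

if __name__ == "__main__":
    model = DepthFallClassifier()
    logits = model(torch.randn(4, 1, 224, 224))
    print(logits.shape)  # torch.Size([4, 2])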

Citation (APA)
Kong, X., Chen, L., Wang, Z., Chen, Y., Meng, L., & Tomiyama, H. (2019). Robust self-adaptation fall-detection system based on camera height. Sensors (Switzerland), 19(17). https://doi.org/10.3390/s19173768
