Spatial-temporal feature fusion for human fall detection

Abstract

A sudden fall to the ground can seriously injure an elderly person. This paper presents a vision-based fall detection approach using a low-cost depth camera. The approach is based on a novel combination of three feature types: curvature scale space (CSS), morphological, and temporal features. The CSS and morphological features capture different properties of the human silhouette during a fall. These two feature vectors are clustered to generate occurrence histograms as fall representations. Meanwhile, the trajectory of a skeleton point, which depicts the temporal property of the fall action, serves as a complementary representation. For each individual feature, an extreme learning machine (ELM) classifier is trained separately for fall prediction. Finally, the prediction scores are fused to decide whether a fall has occurred. To evaluate the approach, we built a depth dataset by capturing six daily actions (falling, bending, sitting, squatting, walking, and lying) from 20 subjects. Extensive experiments show that the proposed approach achieves an average fall detection accuracy of 85.89%, clearly outperforming each feature type used individually.
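The final decision step described in the abstract is a score-level (late) fusion of the three per-feature classifiers. A minimal sketch of that idea is shown below; the `fuse_scores` function, the weights, and the placeholder scores are illustrative assumptions, not the paper's actual ELM implementation or its learned fusion rule.

```python
import numpy as np

def fuse_scores(scores, weights=None, threshold=0.5):
    """Late (score-level) fusion: weighted average of per-feature
    fall-probability scores, thresholded into a fall/no-fall decision.

    `scores` holds one score per feature channel (e.g. CSS,
    morphological, temporal). Equal weights are assumed when none
    are given; the paper's real fusion weights are not specified here.
    """
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    fused = float(np.dot(weights, scores))
    return fused, fused >= threshold

# Hypothetical scores: CSS-based classifier 0.9, morphological 0.7,
# temporal (skeleton trajectory) 0.6.
fused, is_fall = fuse_scores([0.9, 0.7, 0.6])
print(f"fused score = {fused:.4f}, fall detected = {is_fall}")
```

With equal weights this reduces to a simple mean of the three scores; a weighted variant lets more discriminative channels dominate the decision.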

Citation (APA)

Ma, X., Wang, H., Xue, B., & Li, Y. (2015). Spatial-temporal feature fusion for human fall detection. In Communications in Computer and Information Science (Vol. 546, pp. 438–447). Springer Verlag. https://doi.org/10.1007/978-3-662-48558-3_44
