A method of recognizing human posture in traditional camera images for surveillance applications is presented in this paper. Recognition of human posture from a camera has been considered a cue for modelling human activity in automated surveillance systems. The aim of this study is to analyze the use of joint angles between key body points, together with machine learning algorithms, to classify human posture into three categories: Standing, Sitting, and Lying. Positions of key body points were obtained from a deep convolutional neural network. The novelty of this approach lies in its use of existing traditional cameras without depth sensors, which overcomes the limitations of joint tracking with depth sensors such as Kinect. The distance measured between two key body points, the hip and the knee, of persons in 2D images was also used for posture recognition. The results show that 2D information on the angles between certain joints can be used to recognize human posture. This approach achieved higher accuracy than simple distance measurement between joints and is computationally efficient, and it can be adopted using security cameras and computer hardware already in place.
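As a rough illustration of the pipeline described in the abstract, the sketch below computes joint angles and a hip-knee distance from 2D keypoints and feeds them to a simple classifier. The keypoint names, the choice of shoulder-hip-knee and hip-knee-ankle angles, the k-nearest-neighbour classifier, and all coordinates are illustrative assumptions; the abstract does not specify which joints, which pose-estimation network, or which machine learning algorithm the authors used.

```python
# Minimal sketch, assuming 2D keypoints (pixel coordinates) from some pose
# estimator. Joint choices and the classifier are illustrative, not the
# paper's exact method.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def joint_angle(a, b, c):
    """Angle in degrees at point b formed by the 2D points a-b-c."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def posture_features(kp):
    """Feature vector: two joint angles plus the 2D hip-knee distance."""
    hip_angle = joint_angle(kp["shoulder"], kp["hip"], kp["knee"])
    knee_angle = joint_angle(kp["hip"], kp["knee"], kp["ankle"])
    hip_knee_dist = np.linalg.norm(np.asarray(kp["hip"], dtype=float)
                                   - np.asarray(kp["knee"], dtype=float))
    return [hip_angle, knee_angle, hip_knee_dist]

# Toy training examples (hypothetical keypoints for each posture class).
X = [
    posture_features({"shoulder": (100, 50), "hip": (100, 150),
                      "knee": (100, 230), "ankle": (100, 310)}),   # Standing
    posture_features({"shoulder": (100, 60), "hip": (100, 160),
                      "knee": (170, 180), "ankle": (170, 260)}),   # Sitting
    posture_features({"shoulder": (60, 200), "hip": (160, 205),
                      "knee": (240, 210), "ankle": (320, 215)}),   # Lying
]
y = ["Standing", "Sitting", "Lying"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# Classify a new (hypothetical) set of keypoints.
test = posture_features({"shoulder": (90, 55), "hip": (92, 150),
                         "knee": (95, 232), "ankle": (94, 312)})
print(clf.predict([test]))
```

In practice the feature vector and classifier would be trained on labelled keypoints extracted from real surveillance footage; the toy coordinates above only show how angle and distance features are assembled and passed to a learner.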
Arowolo, O. F., Arogunjo, E. O., Owolabi, D. G., & Markus, E. D. (2021). Development of A Human Posture Recognition System for Surveillance Application. International Journal of Computing and Digital Systems, 10(1), 1191–1197. https://doi.org/10.12785/ijcds/1001107