Pedestrian Action Prediction Based on Deep Features Extraction of Human Posture and Traffic Scene


Abstract

This paper proposes a solution for pedestrian action prediction from single images. Prediction is based on analyzing human posture in the context of the traffic scene. Most other solutions rely on the motion properties of sequential frames (video); they may achieve high accuracy but run slowly because they must analyze the relationships between frames. This paper instead analyzes the relationship between pedestrian posture and the traffic scene within a single image, with the expectation of maintaining accuracy without inter-frame motion analysis. The method consists of two phases: human detection and pedestrian action prediction. First, pedestrians are detected with the aggregate channel features (ACF) method; then each pedestrian's action is predicted by extracting features from the detected region and applying a classifier trained on features extracted from a pedestrian image dataset with a convolutional neural network (CNN). Accuracy ranges from 82% to 97%, with an average response time of 0.6 s per identified pedestrian.
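The two-phase structure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `detect_pedestrians_acf` and `classify_action_cnn` are hypothetical stand-ins for a trained ACF detector and a trained CNN classifier, here stubbed out so the pipeline shape is runnable.

```python
import numpy as np

def detect_pedestrians_acf(image):
    """Stand-in for phase 1 (ACF pedestrian detection): return (x, y, w, h) boxes.
    A real system would run a trained aggregate-channel-features detector here."""
    h, w = image.shape[:2]
    # Placeholder: a single box covering the image centre.
    return [(w // 4, h // 4, w // 2, h // 2)]

def classify_action_cnn(crop):
    """Stand-in for phase 2 (CNN action classification): return (label, score).
    A real system would extract features from the crop and apply the trained classifier."""
    # Placeholder decision based on mean brightness (illustration only).
    return ("crossing", float(crop.mean()) / 255.0)

def predict_actions(image):
    """Full pipeline: detect each pedestrian, then classify that pedestrian's action."""
    results = []
    for (x, y, w, h) in detect_pedestrians_acf(image):
        crop = image[y:y + h, x:x + w]
        label, score = classify_action_cnn(crop)
        results.append(((x, y, w, h), label, score))
    return results

# Example: a uniform grey test image stands in for a traffic-scene frame.
image = np.full((240, 320, 3), 128, dtype=np.uint8)
print(predict_actions(image))
```

The design point the paper makes is that both phases operate on a single image, so no inter-frame motion analysis appears anywhere in the loop.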

CITATION STYLE

APA

Tran, D. P., Nhu, N. G., & Hoang, V. D. (2018). Pedestrian Action Prediction Based on Deep Features Extraction of Human Posture and Traffic Scene. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10752 LNAI, pp. 563–572). Springer Verlag. https://doi.org/10.1007/978-3-319-75420-8_53
