View-invariant human feature extraction for video-surveillance applications

We present a view-invariant human feature extractor (shape + pose) for pedestrian monitoring in man-made environments. Our approach comprises two stages. First, offline, a series of view-based models is built by discretizing the viewpoint with respect to the camera into several training views. During the online stage, the homography relating the image points to the closest, most adequate training plane is computed from the dominant 3D directions of the scene. The input image is then warped to this training view and processed with the corresponding view-based model. After model fitting, the inverse transformation is applied to the resulting human features, yielding a segmented silhouette and a 2D pose estimate in the original input image. Experimental results show that the system performs well, independently of the direction of motion, on monocular sequences with strong perspective effects.
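The warp-and-invert step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the homography and pose keypoints are made-up values, and the view-based model fitting in the training view is omitted. It only shows how features found in the warped view are mapped back to the original image with the inverse homography.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of 2D image points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # back to inhomogeneous

# Hypothetical homography relating the input image to the closest training view
H = np.array([[1.10, 0.05, 10.0],
              [0.02, 0.95, -5.0],
              [1e-4, 2e-4,  1.0]])

# Hypothetical 2D pose keypoints located in the original input image
pose_input = np.array([[120.0, 40.0],
                       [130.0, 90.0],
                       [125.0, 160.0]])

# 1) Warp the points into the training view ...
pose_training = apply_homography(H, pose_input)
# 2) ... fit the view-based model there (omitted in this sketch) ...
# 3) ... then map the fitted features back to the original image
pose_back = apply_homography(np.linalg.inv(H), pose_training)
```

Round-tripping through the homography and its inverse recovers the original coordinates, which is what guarantees the final silhouette and pose land correctly in the input image.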




Rogez, G., Guerrero, J. J., & Orrite, C. (2007). View-invariant human feature extraction for video-surveillance applications. In 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, AVSS 2007 Proceedings (pp. 324–329).
