Multi-camera tracking of articulated human motion using motion and shape cues

Abstract

We present a framework and algorithm for tracking articulated human motion. We use multiple calibrated cameras and an articulated human shape model. Tracking is performed using motion cues as well as image-based cues such as silhouettes and "motion residues" (hereafter referred to as spatial cues), as opposed to constructing a 3D volume image or visual hulls. Our algorithm consists of a predictor and a corrector: the predictor estimates the pose at time t + 1 using motion information between the images at times t and t + 1; the error in the estimated pose is then corrected using spatial cues from the images at t + 1. In the predictor, we use robust multi-scale parametric optimisation to estimate the pixel displacement of each body segment, and then an iterative procedure to estimate the change in pose from the pixel displacements of points on the individual body segments. We also present a method for fusing information from different spatial cues, such as silhouettes and motion residues, into a single energy function. We express this energy function in terms of the pose parameters and find the optimum pose for which the energy is minimised. © Springer-Verlag Berlin Heidelberg 2006.
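The predictor-corrector structure described in the abstract can be sketched in miniature. The sketch below is a hypothetical illustration, not the paper's implementation: `predict_pose`, `spatial_energy`, and `correct_pose` are stand-in names, the pose is reduced to a small parameter vector, and the spatial-cue energy is replaced by a simple squared distance to a cue-implied pose, minimised by gradient descent.

```python
import numpy as np

def predict_pose(pose_t, motion_displacement):
    # Predictor (hypothetical stand-in): update each pose parameter from
    # the motion estimated between the images at times t and t + 1.
    return pose_t + motion_displacement

def spatial_energy(pose, cue_pose):
    # Corrector energy (hypothetical stand-in): squared distance to the
    # pose implied by spatial cues (silhouettes, motion residues).
    return np.sum((pose - cue_pose) ** 2)

def correct_pose(pose_pred, cue_pose, lr=0.5, steps=20):
    # Corrector: minimise the spatial-cue energy over the pose parameters
    # by gradient descent, starting from the predicted pose.
    pose = pose_pred.copy()
    for _ in range(steps):
        grad = 2.0 * (pose - cue_pose)  # d(energy)/d(pose)
        pose = pose - lr * grad
    return pose

# One predictor-corrector step on a toy 3-parameter pose vector.
pose_t = np.array([0.0, 1.0, 2.0])
motion = np.array([0.1, -0.2, 0.05])    # motion-cue displacement estimate
cue_pose = np.array([0.12, 0.85, 2.0])  # pose implied by spatial cues at t + 1

pose_pred = predict_pose(pose_t, motion)
pose_corr = correct_pose(pose_pred, cue_pose)
```

The prediction carries the pose forward from motion information alone; the correction then pulls it toward whatever the spatial cues support, which is the division of labour the abstract describes.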

Citation (APA)

Sundaresan, A., & Chellappa, R. (2006). Multi-camera tracking of articulated human motion using motion and shape cues. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3852 LNCS, pp. 131–140). https://doi.org/10.1007/11612704_14
