We present a framework and algorithm for tracking articulated human motion. We use multiple calibrated cameras and an articulated human shape model. Tracking is performed using motion cues as well as image-based cues such as silhouettes and "motion residues" (hereafter referred to as spatial cues), as opposed to constructing 3D volumes or visual hulls. Our algorithm consists of a predictor and a corrector: the predictor estimates the pose at time t + 1 using motion information between the images at times t and t + 1; the error in the estimated pose is then corrected using spatial cues from the images at t + 1. In the predictor, we use robust multi-scale parametric optimisation to estimate the pixel displacement of each body segment, and then use an iterative procedure to estimate the change in pose from the pixel displacements of points on the individual body segments. We also present a method for fusing the different spatial cues, such as silhouettes and motion residues, into a single energy function. We then express this energy function in terms of the pose parameters and find the pose for which the energy is minimised. © Springer-Verlag Berlin Heidelberg 2006.
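The predictor-corrector structure described above can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: the scalar "pose", the drifting motion cue, and the quadratic spatial-cue energy are all toy assumptions standing in for the articulated pose vector, the multi-scale displacement estimates, and the fused silhouette/motion-residue energy of the paper.

```python
def predict(pose, displacement):
    """Predictor: propagate the pose using a motion cue between frames t and t+1."""
    return pose + displacement

def correct(pose, energy_grad, step=0.25, iters=50):
    """Corrector: minimise a spatial-cue energy at frame t+1 by gradient descent."""
    for _ in range(iters):
        pose -= step * energy_grad(pose)
    return pose

# Toy sequence: the true scalar "pose" at each frame, and a spatial-cue
# energy E(p) = (p - truth)^2 whose minimum is the true pose.
truths = [1.0, 1.5, 2.5]
pose = 0.9                           # slightly wrong initial estimate
for t, truth in enumerate(truths):
    motion = truths[t] - truths[t - 1] if t > 0 else 0.0
    noisy_motion = motion + 0.1      # motion cue corrupted by drift
    pose = predict(pose, noisy_motion)          # prediction accumulates error...
    pose = correct(pose, lambda p: 2.0 * (p - truth))  # ...which the corrector removes
print(round(pose, 3))
```

The point of the two-stage design is visible even here: the motion cue alone would drift (0.1 per frame in this toy), while the corrector pulls each predicted pose back to the minimum of the spatial-cue energy.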
CITATION STYLE
Sundaresan, A., & Chellappa, R. (2006). Multi-camera tracking of articulated human motion using motion and shape cues. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3852 LNCS, pp. 131–140). https://doi.org/10.1007/11612704_14