This paper presents a method to estimate 3D human pose and body shape from monocular videos. While recent approaches infer 3D pose from silhouettes and landmarks, we exploit properties of optical flow to temporally constrain the reconstructed motion. We estimate human motion by minimizing the difference between computed flow fields and the output of our novel flow renderer. Using only a single semi-automatic initialization step, we are able to reconstruct monocular sequences without joint annotations. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video.
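The core idea described above, fitting motion parameters by minimizing the discrepancy between an observed flow field and a flow field rendered from the hypothesized motion, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the one-parameter `render_flow` model, the grid search, and all names are assumptions for demonstration only.

```python
# Toy sketch of flow-based motion fitting (illustrative, not the paper's method).
# A "flow renderer" maps a motion parameter theta to a 2D flow field; we search
# for the theta whose rendered flow best matches the observed flow field.

def render_flow(theta, n=4):
    """Hypothetical flow renderer: a uniform flow field driven by one parameter."""
    return [[(theta, 0.5 * theta) for _ in range(n)] for _ in range(n)]

def flow_energy(observed, rendered):
    """Data term: sum of squared per-pixel flow-vector differences."""
    return sum(
        (u1 - u2) ** 2 + (v1 - v2) ** 2
        for row_o, row_r in zip(observed, rendered)
        for (u1, v1), (u2, v2) in zip(row_o, row_r)
    )

def estimate_theta(observed, candidates):
    """Pick the motion parameter whose rendered flow minimizes the energy."""
    return min(candidates, key=lambda t: flow_energy(observed, render_flow(t)))

# Pretend the observed flow came from an optical flow algorithm on two frames.
observed = render_flow(1.3)
best = estimate_theta(observed, [i / 10 for i in range(30)])
print(best)
```

In the actual paper the motion model is a full articulated body with shape parameters and the comparison is done per frame over a video sequence; the grid search here stands in for a proper continuous optimizer.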
Citation:
Alldieck, T., Kassubeck, M., Wandt, B., Rosenhahn, B., & Magnor, M. (2017). Optical flow-based 3D human motion estimation from monocular video. In Lecture Notes in Computer Science (Vol. 10496, pp. 347–360). Springer. https://doi.org/10.1007/978-3-319-66709-6_28