Contact and Human Dynamics from Monocular Video

24 citations · 73 Mendeley readers

Abstract

Existing deep models that predict 2D and 3D kinematic poses from video are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles. In this paper, we present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input. We first estimate ground contact timings with a novel prediction network that is trained without hand-labeled data. A physics-based trajectory optimization then solves for a physically plausible motion based on these inputs. We show that this process produces motions significantly more realistic than those from purely kinematic methods, substantially improving quantitative measures of both kinematic and dynamic plausibility. We demonstrate our method on character animation and pose estimation tasks, using dynamic dancing and sports motions with complex contact patterns.
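To give a feel for the kind of physical constraint the abstract describes, the sketch below clamps a noisy per-frame foot-height estimate so that frames predicted to be in contact rest on the ground and no frame penetrates below it. This is only a minimal illustrative stand-in, not the authors' trajectory optimization (which solves for full-body dynamics); the function name, the flat-ground assumption, and the toy data are all hypothetical.

```python
import numpy as np

def enforce_contact_constraints(foot_heights, contacts, ground_height=0.0):
    """Clamp foot heights against two simple physical constraints:
    (1) no frame may penetrate the ground, and
    (2) frames labeled as in contact must rest exactly on the ground.
    Hypothetical helper, not the paper's full trajectory optimization."""
    corrected = np.maximum(foot_heights, ground_height)  # fix ground penetration
    corrected[contacts] = ground_height                  # pin contact frames
    return corrected

# Toy example: a noisy kinematic estimate with one penetrating frame,
# and contact labels as a contact-prediction network might output them.
heights = np.array([0.10, 0.02, -0.03, 0.01, 0.15])
contacts = np.array([False, True, True, True, False])
print(enforce_contact_constraints(heights, contacts))  # → [0.1 0. 0. 0. 0.15]
```

The real method instead optimizes a full trajectory subject to contact and dynamics constraints, so corrections propagate through the whole body rather than being applied per frame.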

Cite

APA

Rempe, D., Guibas, L. J., Hertzmann, A., Russell, B., Villegas, R., & Yang, J. (2020). Contact and Human Dynamics from Monocular Video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12350 LNCS, pp. 71–87). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58558-7_5
