Estimating human dynamics on-the-fly using monocular video for pose estimation

Abstract

Human pose estimation using uncalibrated monocular visual inputs alone is a challenging problem for both the computer vision and robotics communities. From the robotics perspective, the challenge here is one of pose estimation of a multiply-articulated system of bodies using a single nonspecialized environmental sensor (the camera), and thereby creating low-order surrogate computational models for analysis and control. In this work, we propose a technique for estimating the lower-limb dynamics of a human solely based on captured behavior using an uncalibrated monocular video camera. We leverage our previously developed framework for human pose estimation to (i) deduce the correct sequence of temporally coherent gap-filled pose estimates, (ii) estimate physical parameters, employing a dynamics model incorporating the anthropometric constraints, and (iii) filter the optimized gap-filled pose estimates, using an Unscented Kalman Filter (UKF) with the estimated dynamically-equivalent human dynamics model. We test the framework on videos from the publicly available DARPA Mind's Eye Year 1 corpus [8]. The combined estimation and filtering framework not only produces more accurate, physically plausible pose estimates, but also provides pose estimates for frames where the original human pose estimation framework failed to provide one.
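The UKF stage in step (iii) can be sketched as follows. This is a minimal, generic unscented Kalman filter in Python with NumPy, not the authors' implementation; the state, dynamics function, and measurement function are hypothetical stand-ins for the paper's estimated lower-limb dynamics model.

```python
import numpy as np

def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Generate the 2n+1 scaled sigma points and their weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)       # matrix square root of scaled covariance
    pts = np.vstack([x, x + L.T, x - L.T])      # rows are sigma points
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))    # weights for the mean
    Wc = Wm.copy()                              # weights for the covariance
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle of an Unscented Kalman Filter.

    x, P : prior state mean and covariance
    z    : current measurement (e.g. an observed joint angle)
    f, h : process and measurement models (may be nonlinear)
    Q, R : process and measurement noise covariances
    """
    pts, Wm, Wc = sigma_points(x, P)
    # Predict: push each sigma point through the dynamics model f.
    fp = np.array([f(p) for p in pts])
    x_pred = Wm @ fp
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(Wc, fp - x_pred))
    # Update: map the propagated points through the measurement model h.
    # (A common simplification: reuse the propagated points instead of redrawing.)
    hp = np.array([h(p) for p in fp])
    z_pred = Wm @ hp
    S = R + sum(w * np.outer(d, d) for w, d in zip(Wc, hp - z_pred))
    C = sum(w * np.outer(a, b) for w, a, b in zip(Wc, fp - x_pred, hp - z_pred))
    K = C @ np.linalg.inv(S)                    # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```

As a usage sketch, a constant-velocity joint-angle model `f(s) = [θ + dt·θ̇, θ̇]` with angle-only measurements `h(s) = [θ]` can be iterated over a pose sequence; in the paper's pipeline the dynamics model would instead come from the estimated anthropometric parameters, and the smoothed predictions cover frames with missing pose estimates.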

Citation (APA)

Agarwal, P., Kumar, S., Ryde, J., Corso, J. J., & Krovi, V. N. (2013). Estimating human dynamics on-the-fly using monocular video for pose estimation. In Robotics: Science and Systems (Vol. 8, pp. 1–8). MIT Press Journals. https://doi.org/10.15607/rss.2012.viii.001
