Parametric reshaping of humans in videos incorporating motion retargeting


Abstract

We propose a system for changing the shape of humans in monocular video sequences. First, a 3D model is fitted to each frame of the video in a spatio-temporally coherent manner, using feature points provided by the user through a semi-automatic interface and silhouette correspondences obtained from background subtraction. A 3D morphable model, learned from laser scans of different human subjects, is then used to generate a model with the shape parameters (height, weight, leg length, etc.) specified by the user. The deformed model is retargeted to preserve the semantics of the motion, such as the person's step size. This retargeted model drives a body-aware warping of the foreground of each frame, and the warped foreground is finally composited over the inpainted background. Spatio-temporal consistency is achieved through the combination of automatic pose fitting and body-aware frame warping. Motion retargeting makes the results visually pleasing and natural; for example, a taller human's motion is correspondingly higher than that of the human before warping. We demonstrate shape changes on different subjects performing a variety of actions.
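The motion-retargeting idea in the abstract — that step size should scale with the reshaped body so motion semantics are preserved — can be illustrated with a minimal sketch. The function name and the linear leg-length scaling model below are assumptions for illustration, not the authors' implementation:

```python
def retarget_stride(stride, src_leg_length, dst_leg_length):
    """Scale step size in proportion to leg length (assumed linear model).

    A taller subject (longer legs) takes proportionally larger steps after
    reshaping, so the motion looks natural rather than artificially cramped.
    """
    if src_leg_length <= 0:
        raise ValueError("source leg length must be positive")
    return stride * (dst_leg_length / src_leg_length)


# Example: a 0.60 m stride retargeted from 0.90 m legs to 1.00 m legs
# grows by the ratio 1.00 / 0.90.
new_stride = retarget_stride(0.60, 0.90, 1.00)
```

The full pipeline would apply such retargeting per frame, between fitting the deformed morphable model and performing the body-aware foreground warp.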

Citation (APA)

Prakash, S., & Kalra, P. (2018). Parametric reshaping of humans in videos incorporating motion retargeting. In Communications in Computer and Information Science (Vol. 841, pp. 112–125). Springer Verlag. https://doi.org/10.1007/978-981-13-0020-2_11
