DGPose: Deep Generative Models for Human Body Analysis

Abstract

Deep generative modelling for human body analysis is an emerging problem with many interesting applications. However, the latent space learned by such approaches is typically not interpretable, resulting in less flexibility. In this work, we present deep generative models for human body analysis in which the body pose and the visual appearance are disentangled. Such a disentanglement allows independent manipulation of pose and appearance, and hence enables applications such as pose-transfer without specific training for such a task. Our proposed models, the Conditional-DGPose and the Semi-DGPose, have different characteristics. In the first, body pose labels are taken as conditioners, from a fully-supervised training set. In the second, our structured semi-supervised approach allows for pose estimation to be performed by the model itself and relaxes the need for labelled data. Therefore, the Semi-DGPose aims for the joint understanding and generation of people in images. It is not only capable of mapping images to interpretable latent representations but also able to map these representations back to the image space. We compare our models with relevant baselines, the ClothNet-Body and the Pose Guided Person Generation networks, demonstrating their merits on the Human3.6M, ChictopiaPlus and DeepFashion benchmarks.
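The pose-transfer application described above follows directly from the disentanglement: because pose and appearance occupy independent latent factors, one can encode two images, swap factors, and decode. The following is a minimal illustrative sketch of that idea only, not the paper's implementation; `encode`, `decode`, and the dictionary-based "images" are toy stand-ins for the encoder/decoder networks of the Conditional-DGPose and Semi-DGPose.

```python
# Toy sketch (assumed names, not the authors' code): pose-transfer via a
# disentangled latent space, where pose and appearance are independent factors.

def encode(image):
    # Stand-in for the encoder network: maps an "image" to its two
    # disentangled latent factors.
    return {"pose": image["pose"], "appearance": image["appearance"]}

def decode(pose, appearance):
    # Stand-in for the decoder network: reassembles an image from
    # independently chosen pose and appearance factors.
    return {"pose": pose, "appearance": appearance}

def pose_transfer(source_image, target_image):
    """Render the source person's appearance in the target person's pose,
    with no task-specific training: just swap latent factors and decode."""
    src = encode(source_image)
    tgt = encode(target_image)
    return decode(pose=tgt["pose"], appearance=src["appearance"])

person_a = {"pose": "standing", "appearance": "red coat"}
person_b = {"pose": "sitting", "appearance": "blue shirt"}
result = pose_transfer(person_a, person_b)
# result combines person_b's pose with person_a's appearance.
```

The key design point, mirrored here, is that neither factor is needed to manipulate the other: independent manipulation of pose and appearance is what enables pose-transfer without training specifically for it.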

Citation (APA)

de Bem, R., Ghosh, A., Ajanthan, T., Miksik, O., Boukhayma, A., Siddharth, N., & Torr, P. (2020). DGPose: Deep Generative Models for Human Body Analysis. International Journal of Computer Vision, 128(5), 1537–1563. https://doi.org/10.1007/s11263-020-01306-1
