ClothCap

  • Pons-Moll G
  • Pujades S
  • Hu S
  • Black M. J.
Citations: N/A
Readers: 48 (Mendeley users who have this article in their library)

Abstract

Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the minimally clothed body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. ClothCap is able to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes; this provides a step towards virtual try-on.

Citation (APA)

Pons-Moll, G., Pujades, S., Hu, S., & Black, M. J. (2017). ClothCap. ACM Transactions on Graphics, 36(4), 1–15. https://doi.org/10.1145/3072959.3073711
