Per Garment Capture and Synthesis for Real-time Virtual Try-on


Abstract

Virtual try-on is a promising application of computer graphics and human-computer interaction that can have a profound real-world impact, especially during the pandemic. Existing image-based works try to synthesize a try-on image from a single image of a target garment, but a single image inherently limits the ability to react to possible interactions: it is difficult to reproduce the change of wrinkles caused by changes in pose and body size, or the pulling and stretching of the garment by hand. In this paper, we propose an alternative per-garment capture and synthesis workflow that handles such rich interactions by training the model on many systematically captured images. Our workflow is composed of two parts: garment capture and clothed-person image synthesis. We designed an actuated mannequin and an efficient capture process that collects the detailed deformations of the target garments under diverse body sizes and poses. Furthermore, we propose the use of a custom-designed measurement garment, and we capture paired images of the measurement garment and the target garments. We then learn a mapping between the measurement garment and the target garments using deep image-to-image translation, so that customers can try on the target garments interactively during online shopping. The proposed workflow requires some manual labor, but we believe the cost is acceptable given that retailers already pay significant costs to hire professional photographers, models, stylists, and editors to produce promotional photographs; our method removes the need for these costly professionals. We evaluated the effectiveness of the proposed system with ablation studies and quality comparisons against previous virtual try-on methods, and performed a user study that demonstrates promising virtual try-on performance. Moreover, we demonstrate the use of our method for changing virtual costumes in video conferences.
Finally, we release the collected dataset: a clothing dataset parameterized by viewing angle, body pose, and body size.
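The capture stage sweeps the actuated mannequin through combinations of body size and pose, recording a paired shot of the measurement garment and the target garment at each configuration. A minimal sketch of how such a paired dataset might be enumerated (the sizes, poses, file-naming scheme, and `CapturePair` type are all illustrative assumptions, not details from the paper):

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical capture-loop sketch: for every (body size, pose) setting of
# the actuated mannequin, pair the measurement-garment image with the
# target-garment image taken under the same configuration.

@dataclass(frozen=True)
class CapturePair:
    size: str             # mannequin body-size setting (assumed labels)
    pose: str             # mannequin pose setting (assumed labels)
    measurement_img: str  # path to the measurement-garment image
    target_img: str       # path to the target-garment image

BODY_SIZES = ["S", "M", "L"]
POSES = ["arms_down", "arms_up", "twist"]

def capture_dataset(garment_id: str) -> list[CapturePair]:
    """Enumerate every (size, pose) configuration and pair the two shots."""
    pairs = []
    for size, pose in product(BODY_SIZES, POSES):
        stem = f"{garment_id}_{size}_{pose}"
        pairs.append(CapturePair(size, pose,
                                 f"{stem}_measurement.png",
                                 f"{stem}_target.png"))
    return pairs

dataset = capture_dataset("shirt01")  # 3 sizes x 3 poses = 9 paired shots
```

These image pairs then serve as the supervised training data for the image-to-image translation model that maps the measurement garment to each target garment.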

Citation
Chong, T., Shen, I. C., Umetani, N., & Igarashi, T. (2021). Per Garment Capture and Synthesis for Real-time Virtual Try-on. In UIST 2021 - Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (pp. 457–469). Association for Computing Machinery, Inc. https://doi.org/10.1145/3472749.3474762
