Towards Unsupervised Deformable-Instances Image-to-Image Translation

Abstract

Replacing objects in images is a practical function of photo-editing tools such as Photoshop, e.g., changing clothes. We define this task as Unsupervised Deformable-Instances Image-to-Image Translation (UDIT), which maps multiple foreground instances of a source domain to a target domain, involving significant changes in shape. In this paper, we propose an effective pipeline named Mask-Guided Deformable-instances GAN (MGD-GAN), which first generates target masks in batch and then uses them to synthesize the corresponding instances on the background image, so that all instances are efficiently translated while the background is well preserved. To improve the quality of synthesized images and stabilize training, we design a training procedure that transforms the unsupervised mask-to-instance process into a supervised one by creating paired examples. To objectively evaluate performance on the UDIT task, we design new evaluation metrics based on object detection. Extensive experiments on four datasets demonstrate the significant advantages of our MGD-GAN over existing methods, both quantitatively and qualitatively. Furthermore, our training time is greatly reduced compared to the state of the art. The code is available at https://github.com/sitongsu/MGD_GAN.
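The abstract's mask-guided synthesis step (paste each generated instance onto the preserved background using its generated mask) can be illustrated with a minimal compositing sketch. This is not the authors' implementation; the function name and array layout are assumptions for illustration only.

```python
import numpy as np

def composite_instances(instances, masks, background):
    """Illustrative sketch: blend synthesized instances onto a background.

    instances:  list of HxWx3 float arrays (synthesized foreground objects)
    masks:      list of HxW float arrays in [0, 1] (generated target masks)
    background: HxWx3 float array (background to be preserved)
    """
    out = background.copy()
    for inst, mask in zip(instances, masks):
        m = mask[..., None]               # broadcast mask over color channels
        out = m * inst + (1.0 - m) * out  # alpha-style blend, one instance at a time
    return out
```

Pixels where every mask is zero keep their original background values, matching the paper's goal of translating instances while leaving the background untouched.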

Citation (APA)

Su, S., Song, J., Gao, L., & Zhu, J. (2021). Towards Unsupervised Deformable-Instances Image-to-Image Translation. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1004–1010). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/139
