An Overview of Image-to-Image Translation Using Generative Adversarial Networks

Abstract

Image-to-image translation is an important and challenging problem in computer vision. It aims to learn the mapping between two different image domains, with applications ranging from data augmentation and style transfer to super-resolution. Following the success of deep learning methods in visual generative tasks, researchers have applied deep generative models, especially generative adversarial networks (GANs), to image-to-image translation since 2016 and have made substantial progress. This survey provides a comprehensive review of the literature in this field, covering supervised and unsupervised methods; the unsupervised approaches are further grouped into one-to-one, one-to-many, and many-to-many categories, together with some of the latest theoretical developments. We highlight the innovations of these methods and analyze the models employed and their components. In addition, we summarize commonly used normalization techniques and evaluation metrics, and finally present several challenges and future research directions in this area.
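
As a brief illustration of the mapping described above (not part of the original abstract, and based on the standard conditional-GAN formulation of pix2pix and the cycle-consistency loss of CycleGAN rather than any single method surveyed here): a generator G translates an image x from the source domain into the target domain, a discriminator D tries to distinguish real target images y from translated images G(x), and an inverse generator F enables training on unpaired data. A typical objective, with lambda weighting the reconstruction term, is:

\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}[\, \|y - G(x)\|_1 \,]
G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)

and, for unpaired (unsupervised) training, the paired L1 term is replaced by the cycle-consistency loss

\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x}[\|F(G(x)) - x\|_1] + \mathbb{E}_{y}[\|G(F(y)) - y\|_1].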

Citation (APA)

Chen, X., & Jia, C. (2021). An Overview of Image-to-Image Translation Using Generative Adversarial Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12666 LNCS, pp. 366–380). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-68780-9_31
