Perceptual Conditional Generative Adversarial Networks for End-to-End Image Colourization

Abstract

Colours are everywhere. They embody a significant part of human visual perception. In this paper, we explore the paradigm of hallucinating colours from a given gray-scale image. The problem of colourization has been addressed in previous literature, but mostly in a supervised manner involving user intervention. With the emergence of deep learning methods, numerous tasks in computer vision and pattern recognition have been automated and carried out end-to-end, owing to the availability of large datasets and high-performance computing systems. We investigate and build upon the recent success of Conditional Generative Adversarial Networks (cGANs) for image-to-image translation. Building on the training scheme of the basic cGAN, we propose an encoder-decoder generator network that augments the original cGAN objective with a class-specific cross-entropy loss and a perceptual loss. We train our model on a large-scale dataset and present an illustrative qualitative and quantitative analysis of our results. The results demonstrate the versatility and proficiency of our method through lifelike colourization outcomes.
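The abstract describes a composite generator objective: the standard cGAN adversarial term augmented with a perceptual loss and a class-specific cross-entropy loss. Below is a minimal PyTorch sketch of how such a combined loss could be assembled. The loss weights (lambda_perc, lambda_cls), the VGG16 feature backbone, the auxiliary classification head on the generator, and the discriminator's (grayscale, colour) pairing are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """Feature-space MSE using a frozen VGG16 (assumed backbone).
    Inputs are assumed already normalized for VGG."""
    def __init__(self, layer_idx=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, fake_rgb, real_rgb):
        return self.mse(self.vgg(fake_rgb), self.vgg(real_rgb))

def generator_loss(D, cls_logits, fake_rgb, real_rgb, gray, labels,
                   perc, lambda_perc=10.0, lambda_cls=1.0):
    """Composite generator objective (hypothetical weighting):
    adversarial + lambda_perc * perceptual + lambda_cls * cross-entropy."""
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()
    # Adversarial term: discriminator conditioned on the grayscale input,
    # i.e. it scores the (grayscale, colourized) pair.
    d_fake = D(torch.cat([gray, fake_rgb], dim=1))
    adv = bce(d_fake, torch.ones_like(d_fake))
    # Perceptual term: match deep features of generated vs. ground-truth image.
    perceptual = perc(fake_rgb, real_rgb)
    # Class-specific term: cross-entropy on an assumed auxiliary
    # classification head attached to the generator's encoder.
    cls = ce(cls_logits, labels)
    return adv + lambda_perc * perceptual + lambda_cls * cls
```

In this sketch, the perceptual term steers the generator toward outputs that match ground truth in a deep feature space rather than only pixel space, while the class term encourages the encoder to retain semantic information useful for choosing plausible colours.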

Citation (APA)

Halder, S. S., De, K., & Roy, P. P. (2019). Perceptual Conditional Generative Adversarial Networks for End-to-End Image Colourization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11362 LNCS, pp. 269–283). Springer Verlag. https://doi.org/10.1007/978-3-030-20890-5_18
