Adversarial Edge-Aware Image Colorization with Semantic Segmentation

Abstract

It has become a trend in recent years to use deep neural networks for colorization. However, previous methods often suffer from color leakage across edges and struggle to produce plausible colors when trained with a Euclidean-distance loss. To address these problems, we propose a new adversarial edge-aware image colorization method with multitask output, combined with semantic segmentation. The system uses a generator with a deep semantic fusion structure to infer semantic cues from a given grayscale image under chroma conditions, and it learns colorization by simultaneously predicting color information and semantic information. In addition, we train with a color-difference loss that reflects characteristics of human visual perception, combined with a semantic segmentation loss and an adversarial loss. Experimental results show that our method outperforms existing methods on several quality metrics and achieves good colorization results.
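The abstract describes a multitask objective that combines a perceptually motivated color-difference loss with a semantic segmentation loss and an adversarial loss. A minimal sketch of that idea is below; the CIE76 delta-E formula and the loss weights are illustrative assumptions, not the paper's exact formulation.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB triples.

    A simple perceptual color-difference measure; the paper's actual
    color-difference loss may use a different formula.
    """
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

def total_loss(color_loss, seg_loss, adv_loss,
               w_color=1.0, w_seg=0.5, w_adv=0.1):
    """Weighted sum of the three task losses.

    The weights here are hypothetical placeholders for whatever
    balancing the authors use during training.
    """
    return w_color * color_loss + w_seg * seg_loss + w_adv * adv_loss
```

In practice each term would be computed per batch by the network (e.g., delta-E averaged over pixels for the color term), and the weighted sum would be backpropagated through the generator.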

Citation (APA)
Kong, G., Tian, H., Duan, X., & Long, H. (2021). Adversarial Edge-Aware Image Colorization with Semantic Segmentation. IEEE Access, 9, 28194–28203. https://doi.org/10.1109/ACCESS.2021.3056144
