Feature encoder guided generative adversarial network for face photo-sketch synthesis

Abstract

Face photo-sketch synthesis often suffers from problems such as low clarity, facial distortion, content loss, missing texture, and color inconsistency in the synthesized images. To alleviate these problems, we propose a feature Encoder Guided Generative Adversarial Network (EGGAN) for face photo-sketch synthesis. We adopt a cycle-consistent generative adversarial network with skip connections as the general framework, which trains the models for sketch synthesis and photo synthesis simultaneously, so the two generators constrain each other. In addition, a feature auto-encoder is introduced to refine the synthesized results. The feature encoder is trained to explore a latent space between the photo and sketch domains, under the assumption that a uniform feature representation exists for each photo-sketch pair. Rather than participating in the generation process, the feature encoder is used only to guide training. Meanwhile, a feature loss and a feature consistency loss between the fake and real images are computed in the latent space to prevent the loss of important identity-specific information and to reduce artifacts in the synthesized images. Extensive experiments demonstrate that our method achieves state-of-the-art performance on public databases in terms of both perceptual quality and quantitative assessments.
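The abstract describes two encoder-guided objectives: a feature loss that matches encoder features of each synthesized image to those of its real counterpart, and a feature consistency loss that ties the photo and sketch representations of a pair together in the shared latent space. The sketch below illustrates how such losses might be computed; the ConvEncoder architecture, the feature_losses helper, and the choice of an L1 distance are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Toy feature encoder mapping a face image to a latent feature map
    (a hypothetical stand-in for the paper's feature auto-encoder)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

l1 = nn.L1Loss()

def feature_losses(encoder, real_photo, real_sketch, fake_photo, fake_sketch):
    """Feature loss: match latent features of each fake image to its
    paired real counterpart. Feature consistency loss: pull the photo
    and sketch features of the same pair toward one another, reflecting
    the assumed uniform representation per photo-sketch pair."""
    with torch.no_grad():
        # Real-image features are targets only; no gradient needed.
        f_real_p = encoder(real_photo)
        f_real_s = encoder(real_sketch)
    # Gradients flow through the fakes back to the generators, while the
    # encoder itself stays frozen (it only guides training).
    f_fake_p = encoder(fake_photo)
    f_fake_s = encoder(fake_sketch)
    feat_loss = l1(f_fake_s, f_real_s) + l1(f_fake_p, f_real_p)
    feat_consistency = l1(f_fake_s, f_fake_p)
    return feat_loss, feat_consistency

# Usage with dummy tensors in place of generator outputs:
enc = ConvEncoder()
for p in enc.parameters():
    p.requires_grad_(False)  # encoder is frozen during GAN training
photos = torch.randn(2, 3, 64, 64)
sketches = torch.randn(2, 3, 64, 64)
fl, fc = feature_losses(enc, photos, sketches, photos, sketches)

In a full training loop these two terms would be weighted and added to the adversarial and cycle-consistency objectives; the weights and the generator architectures are given in the paper itself.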

Cite

APA

Zheng, J., Song, W., Wu, Y., Xu, R., & Liu, F. (2019). Feature encoder guided generative adversarial network for face photo-sketch synthesis. IEEE Access, 7, 154971–154985. https://doi.org/10.1109/ACCESS.2019.2949070
