On Open-Set, High-Fidelity and Identity-Specific Face Transformation

Abstract

In this paper, a Generative Adversarial Network (GAN)-based framework is proposed for identity-specific face transformation with high fidelity in open domains. Specifically, for any face, the proposed framework can transform its identity to a target identity while preserving attributes and details (e.g., pose, gender, age, facial expression, skin tone, illumination, and background). To this end, an auto-encoder network is adopted to learn the transformation mapping: it encodes the source image into a latent representation and reconstructs it with the target identity. In addition, a face parsing pyramid is introduced to help the decoder restore the attributes. Moreover, a novel perceptual constraint is applied to the transformed images to guarantee the correct change to the desired identity and to help recover details lost during the identity transformation. Extensive experiments and comparisons against several open-source approaches demonstrate the efficacy of the proposed framework: it achieves more realistic identity transformation while better preserving attributes and details.
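The abstract describes the pipeline only at a high level. As a rough illustration, the sketch below shows the core idea in PyTorch: encode the source face into a latent representation, inject a target identity embedding, decode, and apply a perceptual identity constraint via a pretrained face recognizer. All layer shapes, names, and the `face_net` recognizer are illustrative assumptions rather than the authors' implementation; the face parsing pyramid and the adversarial losses are omitted.

```python
import torch
import torch.nn as nn


class IdentitySwapAutoEncoder(nn.Module):
    """Hypothetical encoder-decoder for identity-specific face transformation:
    encodes a source face, fuses in a target identity embedding, and decodes."""

    def __init__(self, latent_dim=512, id_dim=512):
        super().__init__()
        # Encoder: compress the source face into a spatial latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, latent_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv that fuses the target identity embedding into the latent map.
        self.fuse = nn.Conv2d(latent_dim + id_dim, latent_dim, 1)
        # Decoder: reconstruct the image carrying the target identity.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, source_img, target_id_emb):
        z = self.encoder(source_img)  # (B, latent_dim, H', W')
        # Broadcast the identity embedding over every spatial location.
        id_map = target_id_emb[:, :, None, None].expand(-1, -1, z.shape[2], z.shape[3])
        z = self.fuse(torch.cat([z, id_map], dim=1))
        return self.decoder(z)


def identity_perceptual_loss(face_net, generated, target_ref):
    """Perceptual identity constraint: pull the generated face's identity
    embedding toward the target's, using a pretrained recognizer
    (face_net is a placeholder, e.g. an ArcFace-style network)."""
    emb_gen = face_net(generated)
    emb_tgt = face_net(target_ref)
    return 1.0 - torch.cosine_similarity(emb_gen, emb_tgt, dim=1).mean()
```

In a full training setup, this reconstruction-plus-identity objective would be combined with adversarial and attribute-preservation losses (e.g., from the face parsing pyramid mentioned in the abstract); those components are beyond the scope of this sketch.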

Cite (APA)

Zhang, L., Pan, X., Yang, H., & Li, L. (2020). On Open-Set, High-Fidelity and Identity-Specific Face Transformation. IEEE Access, 8, 224643–224653. https://doi.org/10.1109/ACCESS.2020.3044187
