Texture deformation based generative adversarial networks for multi-domain face editing

Abstract

Despite the significant success of image-to-image translation and latent-representation-based facial attribute editing and expression synthesis, existing approaches still have limitations in preserving identity and detail sharpness, and in generating distinct image translations. To address these issues, we propose a Texture Deformation Based GAN, namely TDB-GAN, which disentangles texture from the original image. Facial attributes and expressions are transferred on the disentangled texture before it is deformed to the target shape and pose. The synthesized faces show sharper details and more distinct visual effects, and the disentanglement also brings faster convergence during training. In extensive ablation studies, we evaluate our method qualitatively and quantitatively on facial attribute and expression synthesis. Results on both the CelebA and RaFD datasets suggest that TDB-GAN achieves better performance.
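The abstract describes a three-stage pipeline: unwarp the face into a pose-normalized texture, edit attributes in texture space, then deform the edited texture to the target shape and pose. The sketch below illustrates that flow of data only; the function names, the nearest-neighbour sampling, and the additive attribute edit are hypothetical stand-ins for the paper's learned modules (the real attribute edit would be a conditional generator, and the warps would come from predicted deformation fields).

```python
import numpy as np

def unwarp_to_texture(image, flow):
    """Sample the image along a dense flow field to obtain a
    pose-normalized texture map (hypothetical stand-in for the
    paper's texture-disentanglement step)."""
    ys = np.clip(flow[..., 0].astype(int), 0, image.shape[0] - 1)
    xs = np.clip(flow[..., 1].astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

def edit_texture(texture, attr_shift):
    """Placeholder attribute/expression edit applied in texture
    space; in TDB-GAN this would be a conditional generator."""
    return np.clip(texture + attr_shift, 0.0, 1.0)

def warp_to_target(texture, flow):
    """Deform the edited texture back to the target shape and
    pose (same sampling operation as the unwarp above)."""
    return unwarp_to_texture(texture, flow)

# Toy run: an identity flow field leaves the geometry unchanged,
# so only the texture-space edit is visible in the output.
img = np.random.rand(4, 4, 3)
ident = np.stack(
    np.meshgrid(np.arange(4), np.arange(4), indexing="ij"), axis=-1
)
tex = unwarp_to_texture(img, ident)
out = warp_to_target(edit_texture(tex, 0.1), ident)
```

The key design point the pipeline expresses is that the attribute edit never sees pose or shape: it operates purely on the normalized texture, which is what the paper credits for sharper details and faster convergence.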

Citation (APA)

Chen, W., Xie, X., Jia, X., & Shen, L. (2019). Texture deformation based generative adversarial networks for multi-domain face editing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11670 LNAI, pp. 257–269). Springer Verlag. https://doi.org/10.1007/978-3-030-29908-8_21
