Abstract
Despite significant progress in image-to-image translation and in latent-representation-based facial attribute editing and expression synthesis, existing approaches still struggle to preserve identity and sharp details and to generate distinct image translations. To address these issues, we propose a Texture Deformation Based GAN, namely TDB-GAN, which disentangles texture from the original image. Facial attributes and expressions are transferred on the disentangled texture, which is then deformed to the target shape and pose. The synthesized faces exhibit sharper details and more distinct visual effects, and training converges faster. In extensive ablation studies, we evaluate our method qualitatively and quantitatively on facial attribute and expression synthesis. The results on both the CelebA and RaFD datasets suggest that TDB-GAN achieves better performance.
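The pipeline the abstract outlines — disentangle a pose-free texture, edit attributes on that texture, then deform it to the target shape and pose — can be illustrated with a toy dense deformation (warp) field. This is purely a conceptual sketch: the function names, the nearest-neighbor warp, and the brightness-shift "edit" are all stand-ins and are not taken from the paper, where both the deformation and the attribute translation are learned by the GAN.

```python
import numpy as np

def warp(image, flow):
    """Nearest-neighbor warp: output[y, x] = image[y + flow_y, x + flow_x].

    `flow` is an (H, W, 2) deformation field; sampled coordinates are
    clipped to the image bounds. A hypothetical stand-in for the learned
    deformation in TDB-GAN, not the paper's implementation.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1).astype(int)
    return image[src_y, src_x]

def edit_texture(texture, delta=0.2):
    """Placeholder attribute edit applied on the pose-free texture
    (a brightness shift standing in for a learned attribute translation)."""
    return np.clip(texture + delta, 0.0, 1.0)

# Toy pipeline: disentangle (warp to a canonical layout), edit the
# texture, then deform the edited texture to the target shape/pose.
face = np.random.rand(8, 8)
to_canonical = np.zeros((8, 8, 2))  # identity flow, for the demo only
to_target = np.zeros((8, 8, 2))     # identity flow, for the demo only

texture = warp(face, to_canonical)
edited = edit_texture(texture)
output = warp(edited, to_target)
```

The key design idea conveyed by the abstract is the ordering: attribute and expression editing happens on the disentangled texture first, and the deformation to the target shape and pose is applied afterwards.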
Chen, W., Xie, X., Jia, X., & Shen, L. (2019). Texture deformation based generative adversarial networks for multi-domain face editing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11670 LNAI, pp. 257–269). Springer Verlag. https://doi.org/10.1007/978-3-030-29908-8_21