AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections


Abstract

Previous animatable 3D-aware GANs for human generation have focused primarily on either the human head or the full body. However, head-only videos are relatively uncommon in real life, and full-body generation typically does not address facial expression control and still struggles to produce high-quality results. Toward practical video avatars, we present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements. It is a generative model trained on unstructured 2D image collections without using any 3D or video data. For this new task, we base our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations. A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces, which is critical for portrait images. A pose deformation processing network is developed to generate plausible deformations for challenging regions such as long hair. Experiments show that our method, trained on unstructured 2D images, can generate diverse and high-quality 3D portraits with the desired control over different properties.
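
The dual-camera scheme lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' code: the module names, dimensions, and the non-saturating loss form are all assumptions made for illustration. It shows the core idea the abstract describes, namely that each generated sample is rendered twice, once from a head-and-shoulders portrait camera and once from a face-centered camera, with a separate discriminator scoring each view so that adversarial supervision is concentrated on the face.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyPortraitGenerator(nn.Module):
        """Stand-in for the 3D-aware generator: maps a latent code,
        animation parameters (expression / head pose / shoulder pose),
        and a camera pose to an RGB rendering. The actual model renders
        deformed generative radiance manifolds instead."""
        def __init__(self, z_dim=512, anim_dim=64, res=64):
            super().__init__()
            self.res = res
            self.net = nn.Sequential(
                nn.Linear(z_dim + anim_dim + 16, 256),
                nn.ReLU(),
                nn.Linear(256, 3 * res * res),
            )

        def forward(self, z, anim, cam):
            # cam: (B, 4, 4) camera-to-world matrix, flattened as conditioning.
            x = torch.cat([z, anim, cam.flatten(1)], dim=1)
            return self.net(x).view(-1, 3, self.res, self.res)

    def generator_loss(g, d_portrait, d_face, z, anim, cam_portrait, cam_face):
        # Render the same identity and animation state from two cameras:
        # a head-and-shoulders portrait view and a face-centered view.
        img_portrait = g(z, anim, cam_portrait)
        img_face = g(z, anim, cam_face)
        # Each view is scored by its own discriminator; the face branch
        # concentrates adversarial pressure on facial quality. The
        # non-saturating loss here is a common choice, not one the
        # abstract specifies.
        return (F.softplus(-d_portrait(img_portrait)).mean()
                + F.softplus(-d_face(img_face)).mean())

A toy invocation, with trivial linear discriminators standing in for the real ones:

    g = ToyPortraitGenerator()
    d_p = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
    d_f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
    z = torch.randn(4, 512)
    anim = torch.randn(4, 64)        # expression + head/shoulder parameters
    loss = generator_loss(g, d_p, d_f, z, anim,
                          torch.randn(4, 4, 4), torch.randn(4, 4, 4))
    loss.backward()
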

Citation (APA)

Wu, Y., Xu, S., Xiang, J., Wei, F., Chen, Q., Yang, J., & Tong, X. (2023). AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections. In SIGGRAPH Asia 2023 Conference Papers (SA '23). Association for Computing Machinery. https://doi.org/10.1145/3610548.3618164
