AvatarReX: Real-Time Expressive Full-body Avatars

Abstract

We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands, and face together, but also supports real-time animation and rendering. To this end, we propose a compositional avatar representation, in which the body, hands, and face are modeled separately such that the structural prior from parametric mesh templates is properly utilized without compromising representation flexibility. Furthermore, we disentangle geometry and appearance for each part. With these technical designs, we propose a dedicated deferred rendering pipeline, which can be executed at a real-time framerate to synthesize high-quality free-view images. The disentanglement of geometry and appearance also allows us to design a two-pass training strategy that combines volume rendering and surface rendering for network training. In this way, patch-level supervision can be applied to force the network to learn sharp appearance details on the basis of the estimated geometry. Overall, our method enables the automatic construction of expressive full-body avatars with real-time rendering capability, and can generate photo-realistic images with dynamic details for novel body motions and facial expressions.
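The core structural idea of the abstract, separate part models with geometry and appearance disentangled within each part, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration of that decomposition only, not the authors' implementation: the names PartNeRF and CompositionalAvatar, the network sizes, and the density-based part composition are all assumptions made for exposition, and the paper's mesh-template priors, deferred rendering pipeline, and two-pass training are not reproduced here.

```python
# Minimal, illustrative sketch of a compositional avatar with
# disentangled geometry and appearance per part (body, hands, face).
# NOT the AvatarReX implementation; all names/sizes are hypothetical.
import torch
import torch.nn as nn


class PartNeRF(nn.Module):
    """One part: separate geometry and appearance networks."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        # Geometry network: 3D point -> density + geometry feature.
        self.geometry = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + hidden),
        )
        # Appearance network: geometry feature + view direction -> RGB.
        self.appearance = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, viewdir: torch.Tensor):
        out = self.geometry(xyz)
        density, feat = out[..., :1], out[..., 1:]
        rgb = self.appearance(torch.cat([feat, viewdir], dim=-1))
        return density, rgb


class CompositionalAvatar(nn.Module):
    """Body, hands, and face modeled separately, composed at query time."""

    def __init__(self):
        super().__init__()
        self.parts = nn.ModuleDict({
            "body": PartNeRF(), "hands": PartNeRF(), "face": PartNeRF(),
        })

    def forward(self, xyz: torch.Tensor, viewdir: torch.Tensor):
        # Query every part; per sample, keep the densest part's output.
        # (A crude stand-in for mesh-guided part assignment in the paper.)
        densities, rgbs = [], []
        for part in self.parts.values():
            d, c = part(xyz, viewdir)
            densities.append(d)
            rgbs.append(c)
        densities = torch.stack(densities)  # (parts, N, 1)
        rgbs = torch.stack(rgbs)            # (parts, N, 3)
        winner = densities.argmax(dim=0, keepdim=True)       # (1, N, 1)
        density = torch.gather(densities, 0, winner).squeeze(0)
        rgb = torch.gather(rgbs, 0, winner.expand(-1, -1, 3)).squeeze(0)
        return density, rgb


# Example query: densities and colors for 1024 sampled points.
avatar = CompositionalAvatar()
density, rgb = avatar(torch.rand(1024, 3), torch.rand(1024, 3))
```

Because each part keeps its geometry and appearance in separate networks, a renderer can evaluate geometry first and shade afterwards, which is the property the paper's deferred rendering pipeline and two-pass (volume plus surface) training strategy rely on.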

Cite

APA

Zheng, Z., Zhao, X., Zhang, H., Liu, B., & Liu, Y. (2023). AvatarReX: Real-Time Expressive Full-body Avatars. ACM Transactions on Graphics, 42(4). https://doi.org/10.1145/3592101
