Example-Based Facial Animation of Virtual Reality Avatars Using Auto-Regressive Neural Networks

Abstract

This article presents a hybrid animation approach that combines example-based and neural animation methods to create a simple yet powerful animation regime for human faces. Example-based methods usually employ a database of prerecorded sequences that are concatenated or looped to synthesize novel animations. In contrast to this traditional example-based approach, we introduce a lightweight auto-regressive network that transforms our animation database into a parametric model. During training, our network learns the dynamics of facial expressions, which enables the replay of annotated sequences from our animation database as well as their seamless concatenation in a new order. This representation is especially useful for the synthesis of visual speech, where coarticulation creates interdependencies between adjacent visemes that affect their appearance. Instead of creating an exhaustive database that contains all viseme variants, we use our animation network to predict the correct appearance. This allows realistic, example-based synthesis of novel facial animation sequences such as visual speech as well as general facial expressions.
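The core idea described in the abstract, an auto-regressive step that predicts the next frame of expression parameters from the previous frame plus a sequence annotation, can be sketched as follows. This is a minimal illustration under assumed dimensions and a toy linear model, not the authors' actual network; the names `step`, `rollout`, `DIM_EXPR`, and `NUM_LABELS` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_EXPR = 8    # size of the expression parameter vector (assumption)
NUM_LABELS = 4  # number of annotated sequence labels, e.g. visemes (assumption)

# Toy weights standing in for a trained model; the paper's network would
# learn these from the prerecorded animation database.
W = rng.standard_normal((DIM_EXPR, DIM_EXPR + NUM_LABELS)) * 0.1
b = np.zeros(DIM_EXPR)

def step(prev_expr, label):
    """One auto-regressive step: previous expression + label -> next frame."""
    onehot = np.zeros(NUM_LABELS)
    onehot[label] = 1.0
    inp = np.concatenate([prev_expr, onehot])
    return np.tanh(W @ inp + b)

def rollout(init_expr, labels):
    """Replay a label sequence frame by frame, feeding each prediction back
    in, so concatenated sequences transition smoothly instead of jumping."""
    frames = [init_expr]
    for lab in labels:
        frames.append(step(frames[-1], lab))
    return np.stack(frames[1:])

# Concatenating annotated sequences in a new order (labels 0, then 1, then 2):
frames = rollout(np.zeros(DIM_EXPR), [0, 0, 1, 1, 2])
print(frames.shape)  # (5, 8)
```

Because each frame is conditioned on the previous prediction, the model can account for coarticulation effects at sequence boundaries without storing every viseme variant explicitly.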

Citation (APA)
Paier, W., Hilsmann, A., & Eisert, P. (2021). Example-Based Facial Animation of Virtual Reality Avatars Using Auto-Regressive Neural Networks. IEEE Computer Graphics and Applications, 41(4), 52–63. https://doi.org/10.1109/MCG.2021.3068035
