Learning Speech-driven 3D Conversational Gestures from Video

Abstract

We propose the first approach to synthesize synchronous 3D conversational body and hand gestures, as well as 3D face and head animations, of a virtual character from speech input. Our algorithm uses a CNN architecture that leverages the inherent correlation between facial expressions and hand gestures. Synthesizing conversational body gestures is a multi-modal problem, since many different gestures can plausibly accompany the same input speech. To synthesize plausible body gestures in this setting, we train a Generative Adversarial Network (GAN)-based model that measures the plausibility of generated sequences of 3D body motion when paired with the input audio features. We also contribute a new corpus containing more than 33 hours of annotated data from in-the-wild videos of talking people. To build it, we apply state-of-the-art monocular approaches for 3D body and hand pose estimation, as well as 3D face performance capture, to the video corpus. This lets us train on orders of magnitude more data than previous algorithms, which resort to complex in-studio motion capture solutions, and thereby learn more expressive synthesis models. Our experiments and user study demonstrate the state-of-the-art quality of our speech-synthesized full 3D character animations.
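The abstract describes a GAN whose discriminator scores generated 3D body motion jointly with the input audio features. As a rough illustration only (the layer sizes, feature dimensions, and architecture below are assumptions, not the paper's actual model), a minimal PyTorch sketch of such an audio-conditioned discriminator might look like this:

```python
import torch
import torch.nn as nn

class AudioConditionedDiscriminator(nn.Module):
    """Scores how plausibly a 3D body-motion sequence matches audio features.

    Hypothetical sketch: pose_dim, audio_dim, and all layer choices are
    illustrative guesses, not the architecture from the paper.
    """

    def __init__(self, pose_dim=63, audio_dim=128, hidden=256):
        super().__init__()
        # Temporal 1D convolutions over the concatenated (pose, audio) stream.
        self.net = nn.Sequential(
            nn.Conv1d(pose_dim + audio_dim, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, 1, kernel_size=1),  # per-window realism score
        )

    def forward(self, poses, audio):
        # poses: (batch, frames, pose_dim); audio: (batch, frames, audio_dim)
        x = torch.cat([poses, audio], dim=-1).transpose(1, 2)  # -> (B, C, T)
        return self.net(x).mean(dim=(1, 2))  # one plausibility score per sequence


# Toy usage: real motion paired with its audio should score higher than
# generated or mismatched motion.
disc = AudioConditionedDiscriminator()
poses = torch.randn(4, 64, 63)   # 4 sequences, 64 frames, 21 joints x 3
audio = torch.randn(4, 64, 128)  # matching audio feature frames
scores = disc(poses, audio)      # shape (4,)
print(scores.shape)
```

Conditioning the discriminator on the audio, rather than on the motion alone, is what pushes the generator toward gestures that are not just realistic in isolation but plausible for the given speech.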

Cite (APA)

Habibie, I., Xu, W., Mehta, D., Liu, L., Seidel, H. P., Pons-Moll, G., … Theobalt, C. (2021). Learning Speech-driven 3D Conversational Gestures from Video. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, IVA 2021 (pp. 101–108). Association for Computing Machinery, Inc. https://doi.org/10.1145/3472306.3478335
