Modeling the Conditional Distribution of Co-Speech Upper Body Gesture Jointly Using Conditional-GAN and Unrolled-GAN


Abstract

Co-speech gestures are a crucial non-verbal modality for human communication, and social agents need this capability to appear more human-like and comprehensible. This study models the distribution of gestures conditioned on human speech features. Unlike previous studies that seek injective functions mapping speech to gestures, we propose a novel conditional-GAN-based generative model that not only converts speech into gestures but also approximates, through parameterization, the distribution of gestures conditioned on speech. An objective evaluation and a user study show that the proposed model outperformed an existing deterministic model, indicating that generative models can approximate real patterns of co-speech gestures more closely than deterministic ones. Our results suggest that accounting for the inherent randomness of gesturing is critical when modeling co-speech gestures.
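To make the abstract's approach concrete, below is a minimal, illustrative PyTorch sketch of a conditional GAN for speech-to-gesture generation with a simplified unrolled-GAN generator update. Everything here is an assumption for illustration: the feature dimensions (SPEECH_DIM, NOISE_DIM, GESTURE_DIM), layer sizes, learning rates, and UNROLL_STEPS are placeholders rather than the authors' architecture, and the unrolling is a copy-based approximation that omits backpropagation through the inner discriminator updates used by the full unrolled-GAN method.

```python
# Illustrative sketch only; dimensions and hyperparameters are assumed.
import copy
import torch
import torch.nn as nn

SPEECH_DIM, NOISE_DIM, GESTURE_DIM = 64, 16, 30  # assumed feature sizes

class Generator(nn.Module):
    """Maps (speech features, noise) -> an upper-body gesture pose vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, GESTURE_DIM),
        )
    def forward(self, speech, z):
        return self.net(torch.cat([speech, z], dim=-1))

class Discriminator(nn.Module):
    """Scores a (gesture, speech) pair as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(GESTURE_DIM + SPEECH_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )
    def forward(self, gesture, speech):
        return self.net(torch.cat([gesture, speech], dim=-1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
UNROLL_STEPS = 3  # assumed; the paper's setting may differ

def d_loss(disc, gesture_real, gesture_fake, speech):
    # Discriminator loss on one batch: real pairs -> 1, generated pairs -> 0.
    n = speech.size(0)
    real = bce(disc(gesture_real, speech), torch.ones(n, 1))
    fake = bce(disc(gesture_fake.detach(), speech), torch.zeros(n, 1))
    return real + fake

def train_step(speech, gesture_real):
    z = torch.randn(speech.size(0), NOISE_DIM)
    gesture_fake = G(speech, z)

    # 1) Ordinary discriminator update on the current batch.
    opt_d.zero_grad()
    d_loss(D, gesture_real, gesture_fake, speech).backward()
    opt_d.step()

    # 2) Simplified "unrolled" generator update: train a throwaway copy of D
    #    a few extra steps, then score the generator against that look-ahead
    #    copy. The full unrolled-GAN method also differentiates through these
    #    inner updates; that is omitted here for brevity.
    D_unrolled = copy.deepcopy(D)
    opt_du = torch.optim.Adam(D_unrolled.parameters(), lr=2e-4)
    for _ in range(UNROLL_STEPS):
        opt_du.zero_grad()
        d_loss(D_unrolled, gesture_real,
               G(speech, torch.randn_like(z)), speech).backward()
        opt_du.step()

    opt_g.zero_grad()
    g_loss = bce(D_unrolled(G(speech, z), speech),
                 torch.ones(speech.size(0), 1))
    g_loss.backward()
    opt_g.step()
```

Because the generator receives a noise vector alongside the speech features, sampling different z for the same speech input yields different plausible gestures. This is the one-to-many behavior that, per the abstract, injective speech-to-gesture mappings cannot capture.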

Citation (APA)

Wu, B., Liu, C., Ishi, C. T., & Ishiguro, H. (2021). Modeling the conditional distribution of co-speech upper body gesture jointly using conditional-GAN and unrolled-GAN. Electronics (Switzerland), 10(3), 1–15. https://doi.org/10.3390/electronics10030228
