Integrated Speech and Gesture Synthesis

Abstract

Text-to-speech and co-speech gesture synthesis have until now been treated as separate areas by two different research communities, and applications merely stack the two technologies using a simple system-level pipeline. This can lead to modeling inefficiencies and may introduce inconsistencies that limit the achievable naturalness. We propose instead to synthesize the two modalities in a single model, a new problem we call integrated speech and gesture synthesis (ISG). We also propose a set of models modified from state-of-the-art neural speech-synthesis engines to achieve this goal. We evaluate the models in three carefully designed user studies: two that evaluate the synthesized speech and gesture in isolation, and a combined study that evaluates the models as they would be used in real-world applications, with speech and gesture presented together. The results show that, in all three tests, participants rate one of the proposed integrated synthesis models as being as good as the state-of-the-art pipeline system we compare against. The model achieves this with faster synthesis time and a greatly reduced parameter count compared to the pipeline system, illustrating some of the potential benefits of treating speech and gesture synthesis as a single, unified problem.
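To make the architectural contrast concrete, the sketch below compares the two approaches in minimal PyTorch. It is an illustrative assumption, not the authors' implementation: all class names, layer choices, and dimensions (vocabulary size, hidden width, 80 mel bins, joint count) are made up for the example. The pipeline stacks a speech model and a gesture model that each re-encode the text; the ISG-style model shares one trunk that feeds both an acoustic head and a motion head.

```python
# Minimal conceptual sketch (illustrative assumptions, not the paper's
# implementation): a stacked pipeline vs. a single integrated ISG-style model.
import torch
import torch.nn as nn


class PipelineSystem(nn.Module):
    """System-level pipeline: separate TTS and gesture models, each
    re-encoding the same input text independently."""

    def __init__(self, vocab=64, d=128, n_mels=80, n_joints=45):
        super().__init__()
        self.tts_encoder = nn.Embedding(vocab, d)
        self.tts_decoder = nn.GRU(d, d, batch_first=True)
        self.to_mel = nn.Linear(d, n_mels)            # acoustic output head
        self.gesture_encoder = nn.Embedding(vocab, d)
        self.gesture_decoder = nn.GRU(d, d, batch_first=True)
        self.to_pose = nn.Linear(d, n_joints)         # motion output head

    def forward(self, text):
        h_speech, _ = self.tts_decoder(self.tts_encoder(text))
        h_motion, _ = self.gesture_decoder(self.gesture_encoder(text))
        return self.to_mel(h_speech), self.to_pose(h_motion)


class IntegratedISG(nn.Module):
    """ISG idea: one shared trunk drives both output streams, so speech
    and gesture are generated jointly from the same hidden states."""

    def __init__(self, vocab=64, d=128, n_mels=80, n_joints=45):
        super().__init__()
        self.encoder = nn.Embedding(vocab, d)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.to_mel = nn.Linear(d, n_mels)
        self.to_pose = nn.Linear(d, n_joints)

    def forward(self, text):
        h, _ = self.decoder(self.encoder(text))       # shared representation
        return self.to_mel(h), self.to_pose(h)


if __name__ == "__main__":
    text = torch.randint(0, 64, (1, 20))              # dummy token sequence
    mel, pose = IntegratedISG()(text)
    print(mel.shape, pose.shape)                      # (1, 20, 80) and (1, 20, 45)

    n_params = lambda m: sum(p.numel() for p in m.parameters())
    print(n_params(PipelineSystem()), "vs", n_params(IntegratedISG()))
```

Because the integrated model shares a single encoder and decoder across both outputs, it roughly halves the trunk parameters relative to the stacked pipeline and needs only one forward pass over the text. In miniature, this mirrors the reduced parameter count and faster synthesis time the abstract reports for the integrated model.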

Citation (APA)

Wang, S., Alexanderson, S., Gustafson, J., Beskow, J., Henter, G. E., & Székely, É. (2021). Integrated Speech and Gesture Synthesis. In ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 177–185). Association for Computing Machinery, Inc. https://doi.org/10.1145/3462244.3479914
