Abstract
We propose a new framework for gesture generation, aiming to allow data-driven approaches to produce more semantically rich gestures. Our approach first predicts whether to gesture at all, and then predicts the properties of the gesture. Those properties are then used as conditioning for a modern probabilistic gesture-generation model capable of high-quality output. This enables the model to generate gestures that are both diverse and representational. Follow-ups and more information can be found on the project page: https://svito-zar.github.io/speech2properties2gestures/
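The abstract describes a staged pipeline: first decide whether to gesture, then predict gesture properties, then condition a generator on those properties. Below is a minimal, purely illustrative sketch of that control flow; all function names, feature representations, thresholds, and the toy rules standing in for learned models are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a Speech2Properties2Gestures-style pipeline.
# Every name, threshold, and heuristic here is a placeholder assumption;
# the real system uses learned models at each stage.
from typing import List, Optional

def predict_gesture_presence(speech_features: List[float]) -> bool:
    """Stage 1: decide whether to gesture at all (toy energy threshold
    standing in for a learned binary classifier)."""
    return sum(abs(x) for x in speech_features) / len(speech_features) > 0.5

def predict_gesture_properties(speech_features: List[float]) -> dict:
    """Stage 2: predict gesture properties from speech (toy rule
    standing in for a learned property predictor)."""
    category = "iconic" if max(speech_features) > 0.9 else "beat"
    return {"category": category, "duration_frames": 20}

def generate_gesture(speech_features: List[float],
                     properties: dict) -> List[List[float]]:
    """Final stage: a probabilistic generator conditioned on the
    predicted properties; here just a placeholder emitting zero poses."""
    num_joints = 15  # assumed skeleton size
    return [[0.0] * num_joints for _ in range(properties["duration_frames"])]

def speech_to_gesture(speech_features: List[float]) -> Optional[List[List[float]]]:
    """Full pipeline: gesture/no-gesture -> properties -> conditioned generation."""
    if not predict_gesture_presence(speech_features):
        return None  # the system can explicitly decide not to gesture
    props = predict_gesture_properties(speech_features)
    return generate_gesture(speech_features, props)
```

The key design point the sketch tries to convey is that "whether to gesture" and "what kind of gesture" are separate predictions made before motion is generated, so the generator only runs when a gesture is warranted and is always conditioned on explicit properties.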
Citation
Kucherenko, T., Nagy, R., Jonell, P., Neff, M., Kjellström, H., & Henter, G. E. (2021). Speech2Properties2Gestures: Gesture-Property Prediction as a Tool for Generating Representational Gestures from Speech. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, IVA 2021 (pp. 145–147). Association for Computing Machinery, Inc. https://doi.org/10.1145/3472306.3478333