Automatic Translation of Music-to-Dance for In-Game Characters


Abstract

Music-to-dance translation is an emerging and powerful feature in recent role-playing games. Previous works on this topic treat music-to-dance as a supervised motion-generation problem over time-series data. However, these methods require a large number of paired training examples and may suffer from degraded movement quality. This paper offers a new solution that re-formulates the translation as a piece-wise dance-phrase retrieval problem grounded in choreography theory. With this design, players can optionally edit the dance movements on top of our generation, a form of user interactivity that regression-based methods ignore. Because dance motion capture is expensive and requires the assistance of professional dancers, we train our method in a semi-supervised fashion with a large unlabeled music dataset (20x larger than our labeled one) and also introduce self-supervised pre-training to improve training stability and generalization. Experimental results suggest that our method not only generalizes well across various styles of music but also succeeds in choreography for game players. Our project, including the large-scale dataset and supplemental materials, is available at https://github.com/FuxiCV/music-to-dance.
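The piece-wise retrieval idea described above can be sketched in a few lines: segment the music into phrases, embed each phrase, and pick the nearest dance phrase from a pre-built library. This is a minimal illustration only; the segmentation, the mean-pooling "encoder", and the `retrieve_dance` function are hypothetical stand-ins for the paper's learned components, not its actual model.

```python
import numpy as np

def segment_into_phrases(music_features, phrase_len):
    """Split a (T, D) music feature sequence into fixed-length phrases."""
    n = len(music_features) // phrase_len
    return [music_features[i * phrase_len:(i + 1) * phrase_len] for i in range(n)]

def embed(phrase):
    """Toy embedding: mean-pool the phrase features (stand-in for a learned encoder)."""
    return phrase.mean(axis=0)

def retrieve_dance(music_features, phrase_len, library):
    """For each music phrase, return the dance phrase whose music-key embedding
    is closest in Euclidean distance. `library` is a list of (music_key, dance) pairs."""
    keys = np.stack([embed(key) for key, _ in library])   # (N, D) library embeddings
    dance = []
    for phrase in segment_into_phrases(music_features, phrase_len):
        query = embed(phrase)
        idx = int(np.argmin(np.linalg.norm(keys - query, axis=1)))
        dance.append(library[idx][1])                     # best-matching dance phrase
    return dance
```

Because each output is a phrase retrieved from a library rather than a regressed pose sequence, a player could swap any retrieved phrase for another, which is the editability the abstract contrasts with regression-based methods.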

Cite


APA

Duan, Y., Shi, T., Hu, Z., Zou, Z., Fan, C., Yuan, Y., & Li, X. (2021). Automatic Translation of Music-to-Dance for In-Game Characters. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2344–2351). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/323
