GANimator: Neural Motion Synthesis from a Single Sequence

80 citations · 66 Mendeley readers

This article is free to access.

Abstract

We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence. GANimator generates motions that resemble the core elements of the original motion while simultaneously synthesizing novel and diverse movements. Existing data-driven techniques for motion synthesis require a large motion dataset containing the specific desired skeletal structure. By contrast, GANimator requires training on only a single motion sequence, enabling novel motion synthesis for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more. Our framework contains a series of generative and adversarial neural networks, each responsible for generating motions at a specific frame rate. The framework progressively learns to synthesize motion from random noise, enabling hierarchical control over the generated motion content across varying levels of detail. We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, all of which learn from a single input sequence. Code and data for this paper are available at https://peizhuoli.github.io/ganimator.
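The coarse-to-fine idea described in the abstract can be sketched in a few lines: synthesis starts from noise at the coarsest frame rate, and each stage upsamples the result in time and adds a residual correction at the finer rate. This is a minimal illustrative sketch only, not the authors' implementation; the `upsample` helper, the toy residual functions, and all shapes are assumptions made for the example (the real model uses trained skeleton-aware generators and adversarial losses).

```python
import numpy as np

def upsample(motion, factor=2):
    # Linearly interpolate a motion clip (frames x channels) along the time axis.
    t_old = np.arange(motion.shape[0])
    t_new = np.linspace(0, motion.shape[0] - 1, motion.shape[0] * factor)
    return np.stack(
        [np.interp(t_new, t_old, motion[:, c]) for c in range(motion.shape[1])],
        axis=1,
    )

def synthesize(stage_residuals, coarse_frames=8, channels=4, seed=0):
    # Coarsest level: the motion is initialized from pure random noise.
    rng = np.random.default_rng(seed)
    motion = rng.standard_normal((coarse_frames, channels))
    # Each stage doubles the frame rate, then adds detail at that rate.
    # In the paper each stage is a trained generator; here a placeholder
    # callable stands in for it.
    for residual_fn in stage_residuals:
        motion = upsample(motion)
        motion = motion + residual_fn(motion)
    return motion
```

For example, with three stages and a toy residual such as `lambda m: 0.1 * np.tanh(m)`, an 8-frame coarse clip grows to 64 frames, mirroring how hierarchical control works: editing the coarse level changes global content, while finer stages refine detail.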

Cite

CITATION STYLE

APA

Li, P., Aberman, K., Zhang, Z., Hanocka, R., & Sorkine-Hornung, O. (2022). GANimator: Neural Motion Synthesis from a Single Sequence. ACM Transactions on Graphics, 41(4). https://doi.org/10.1145/3528223.3530157
