MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels


Abstract

We tackle the problem of generating long-term 3D human motion from multiple action labels. The two main previous approaches, action-conditioned and motion-conditioned methods, have limitations in solving this problem. Action-conditioned methods generate a motion sequence from a single action label; hence, they cannot generate long-term motions composed of multiple actions and the transitions between them. Meanwhile, motion-conditioned methods generate future motion from an initial motion. The generated future motion depends only on the past, so it cannot be controlled by the user's desired actions. We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct accounts for both action and motion conditions with a unified recurrent generation system. It repetitively takes the previous motion and an action label, then generates a smooth transition and the motion of the given action. As a result, MultiAct produces realistic long-term motion controlled by the given sequence of multiple action labels. Code is publicly available at https://github.com/TaeryungLee/MultiAct_RELEASE.
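The abstract describes a recurrent generation scheme: at each step the model is conditioned on the previously generated motion and the next action label, and it outputs a transition plus the motion for that action. The sketch below illustrates only this looping structure; every name in it (RecurrentMotionGenerator, the model's input/output interface, generate) is a hypothetical placeholder and not the authors' actual API.

```python
# Minimal sketch of a recurrent long-term motion generation loop, assuming a
# model that maps (previous motion, action label) -> (transition, action motion).
# All names are illustrative placeholders, not the MultiAct codebase API.
from typing import List

import torch


class RecurrentMotionGenerator:
    def __init__(self, model):
        # `model(prev_motion, action_label)` is assumed to return two tensors:
        # a smooth transition segment and the motion of the requested action.
        self.model = model

    @torch.no_grad()
    def generate(self, action_labels: List[int], init_motion: torch.Tensor) -> torch.Tensor:
        segments = [init_motion]
        prev = init_motion
        for label in action_labels:
            # One recurrent step: condition on the previous motion and the
            # next action label to get a transition and the new action motion.
            transition, action_motion = self.model(prev, label)
            segments.extend([transition, action_motion])
            # The newly generated motion conditions the following step.
            prev = action_motion
        # Concatenate segments along the time axis to form the long-term motion.
        return torch.cat(segments, dim=0)
```

Under these assumptions, calling `generate([label_1, label_2, ...], init_motion)` would chain per-action segments with transitions, which is how the abstract characterizes long-term generation controlled by a sequence of action labels.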

Cite

APA: Lee, T., Moon, G., & Lee, K. M. (2023). MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 1231–1239). AAAI Press. https://doi.org/10.1609/aaai.v37i1.25206
