Pose2Pose

Abstract

An artist faces two challenges when creating a 2D animated character to mimic a specific human performance. First, the artist must design and draw a collection of artwork depicting portions of the character in a suitable set of poses, for example, arm and hand poses that can be selected and combined to express the range of gestures typical of that person. Next, to depict a specific performance, the artist must select and position the appropriate artwork at each moment of the animation. This paper presents a system that addresses these challenges by leveraging video of the target human performer. Our system tracks arm and hand poses in an example video of the target. The UI displays clusters of these poses to help artists select representative poses that capture the actor's style and personality. From this mapping of pose data to character artwork, our system can generate an animation from a new performance video. It relies on a dynamic programming algorithm to optimize for smooth animations that match the poses found in the video. Artists used our system to create four 2D characters and were pleased with the final automatically animated results. We also describe additional applications addressing audio-driven and text-based animation.
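The abstract's dynamic programming step can be pictured as a Viterbi-style optimization: choose one piece of artwork per video frame so that the per-frame pose-matching cost plus a penalty for switching artwork is minimized. The sketch below is an illustrative assumption, not the authors' implementation; the cost matrix and `switch_penalty` are hypothetical inputs.

```python
def smooth_pose_assignment(match_cost, switch_penalty=1.0):
    """Pick one artwork pose per frame, trading match quality for smoothness.

    match_cost[t][k] is an assumed cost of showing artwork pose k at frame t
    (e.g. distance between the tracked pose and the pose the artwork depicts).
    """
    n_frames, n_poses = len(match_cost), len(match_cost[0])
    # best[t][k]: minimal total cost of an assignment ending in pose k at frame t
    best = [match_cost[0][:]]
    back = []
    for t in range(1, n_frames):
        prev = best[-1]
        best_t, back_t = [], []
        for k in range(n_poses):
            # staying in the same pose is free; switching pays a penalty
            j = min(range(n_poses),
                    key=lambda j: prev[j] + (switch_penalty if j != k else 0.0))
            best_t.append(prev[j] + (switch_penalty if j != k else 0.0)
                          + match_cost[t][k])
            back_t.append(j)
        best.append(best_t)
        back.append(back_t)
    # backtrack the optimal pose sequence from the cheapest final state
    k = min(range(n_poses), key=lambda k: best[-1][k])
    seq = [k]
    for back_t in reversed(back):
        k = back_t[k]
        seq.append(k)
    return list(reversed(seq))
```

With a high switch penalty the optimizer holds one pose through noisy frames; with a low penalty it tracks the per-frame best match, which is the smoothness trade-off the abstract alludes to.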

Citation (APA)

Willett, N. S., Shin, H. V., Jin, Z., Li, W., & Finkelstein, A. (2020). Pose2Pose. In Proceedings of the International Conference on Intelligent User Interfaces (IUI) (pp. 88–99). Association for Computing Machinery. https://doi.org/10.1145/3377325.3377505
