JAFPro: Joint Appearance Fusion and Propagation for Human Video Motion Transfer from Multiple Reference Images


Abstract

We present a novel framework for human video motion transfer. Unlike recent studies that use only a single source image, we allow users to supply multiple source images by simply imitating a few poses in the desired target video. To aggregate appearance from the multiple inputs, we propose JAFPro, a framework that incorporates two modules: an appearance fusion module that adaptively fuses the information in the supplied images, and an appearance propagation module that propagates textures through flow-based warping to further improve the result. An attractive property of JAFPro is that the quality of its results progressively improves as more imitating images are supplied. Furthermore, we build a new dataset containing a large variety of dancing videos in the wild. Extensive experiments on this dataset demonstrate that JAFPro outperforms state-of-the-art methods both qualitatively and quantitatively. We will release our code and dataset upon publication of this work.
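The propagation module described above relies on flow-based warping, i.e., resampling texture from a source image along a dense flow field. The paper does not give implementation details here, so the following is only a minimal NumPy sketch of generic backward warping with bilinear sampling; the function name, border clamping, and sampling choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp `image` (H, W, C) by a dense flow field (H, W, 2).

    For each target pixel (y, x), sample the source image at
    (y + flow[y, x, 1], x + flow[y, x, 0]) with bilinear interpolation.
    Illustrative stand-in for the texture-propagation warping step.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample coordinates in the source image, clamped to the border.
    src_x = np.clip(xs + flow[..., 0], 0, w - 1)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1)
    # Integer corners around each sample point.
    x0 = np.floor(src_x).astype(int)
    y0 = np.floor(src_y).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    # Bilinear weights, broadcast over the channel axis.
    wx = (src_x - x0)[..., None]
    wy = (src_y - y0)[..., None]
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a zero flow field this is the identity; a constant flow of +1 in x shifts sampling one pixel to the right. In practice such warps are differentiable (e.g., via a grid-sampling op in a deep-learning framework) so the flow can be learned end to end.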

Citation (APA)

Yu, X., Liu, H., Han, X., Li, Z., Xiong, Z., & Cui, S. (2020). JAFPro: Joint Appearance Fusion and Propagation for Human Video Motion Transfer from Multiple Reference Images. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 2544–2552). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3414001
