Depth-varying human video sprite synthesis

Abstract

Video texture is an appealing method for extracting and replaying natural human motion from video shots. There has been much research on video texture analysis, generation, and interactive control. However, the video sprites created by existing methods are typically restricted to a constant depth, which strongly limits motion diversity. In this paper, we propose a novel depth-varying human video sprite synthesis method that significantly increases the degrees of freedom of human video sprites. We introduce a novel image distance function that encodes scale variation and can effectively compare human snapshots with different depths/scales and poses, making it possible to align similar poses at different depths. Transitions between non-consecutive frames are modeled as 2D transformation matrices, which effectively avoid drifting without relying on markers or user intervention. The synthesized depth-varying human video sprites can be seamlessly inserted into new scenes for realistic video composition. A variety of challenging examples demonstrate the effectiveness of our method. © 2012 Springer-Verlag Berlin Heidelberg.
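To make the idea of a scale-encoding image distance concrete, the sketch below illustrates one plausible form of such a measure: each human silhouette is cropped and rescaled to a common height before the pose comparison, and a separate penalty on the scale ratio discourages large depth jumps. This is a minimal Python/NumPy sketch under stated assumptions, not the paper's actual formulation; all names (snapshot_distance, scale_weight, target_height) and the specific terms are hypothetical.

```python
import numpy as np

def bounding_box(mask):
    """Return (top, bottom, left, right) of the nonzero region of a binary mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, bottom, left, right

def normalize(mask, target_height=128):
    """Crop the silhouette and rescale it (nearest neighbour) to a fixed height."""
    t, b, l, r = bounding_box(mask)
    crop = mask[t:b + 1, l:r + 1].astype(float)
    h, w = crop.shape
    scale = target_height / h
    new_w = max(1, int(round(w * scale)))
    rows = (np.arange(target_height) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return crop[np.ix_(rows, cols)], h  # normalized silhouette and original height

def snapshot_distance(mask_a, mask_b, scale_weight=0.1):
    """Scale-tolerant distance between two human silhouettes (hypothetical form)."""
    norm_a, h_a = normalize(mask_a)
    norm_b, h_b = normalize(mask_b)
    # Pad the narrower silhouette so both have the same width before differencing.
    w = max(norm_a.shape[1], norm_b.shape[1])
    pad = lambda m: np.pad(m, ((0, 0), (0, w - m.shape[1])))
    pose_term = np.mean(np.abs(pad(norm_a) - pad(norm_b)))
    # Penalize large depth/scale jumps so candidate transitions stay visually plausible.
    scale_term = abs(np.log(h_a / h_b))
    return pose_term + scale_weight * scale_term
```

In this sketch the pose term compares depth-normalized silhouettes, while the scale term plays the role of the abstract's scale-variation encoding; the actual paper combines pose and scale in its own distance function and further models transitions between non-consecutive frames with 2D transformations.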

Citation (APA)

Hua, W., Yang, W., Dong, Z., & Zhang, G. (2012). Depth-varying human video sprite synthesis. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7145 LNCS, 34–47. https://doi.org/10.1007/978-3-642-29050-3_4
