Evolvement Constrained Adversarial Learning for Video Style Transfer

Abstract

Video style transfer is a useful component for applications such as augmented reality, non-photorealistic rendering, and interactive games. Many existing methods use optical flow to preserve the temporal smoothness of the synthesized video. However, optical flow estimation is sensitive to occlusions and rapid motions. In this work, we therefore introduce a novel evolve-sync loss, computed from evolvements, to replace optical flow. Using this evolve-sync loss, we build an adversarial learning framework, termed Video Style Transfer Generative Adversarial Network (VST-GAN), which extends the MGAN image style transfer method to more efficient video style transfer. Extensive experimental evaluations show quantitative and qualitative improvements over state-of-the-art methods.
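The exact form of the evolve-sync loss is defined in the paper itself; the sketch below only illustrates the general idea, under the assumption that an "evolvement" is the frame-to-frame change of a video, and that the loss penalizes mismatch between the evolvement of the content video and that of the stylized video (the function name and signature here are illustrative, not the authors' implementation):

```python
import numpy as np

def evolve_sync_loss(content_t, content_t1, stylized_t, stylized_t1):
    """Illustrative temporal-consistency loss in the spirit of evolve-sync.

    Assumes an 'evolvement' is the difference between consecutive frames;
    the loss encourages the stylized video to change over time the same
    way the content video does, without estimating optical flow.
    """
    content_evolvement = content_t1 - content_t      # change in the input video
    stylized_evolvement = stylized_t1 - stylized_t   # change in the stylized video
    # Mean squared mismatch between the two evolvements
    return float(np.mean((stylized_evolvement - content_evolvement) ** 2))
```

Under this reading, a stylized video whose frame-to-frame changes exactly track those of the content video incurs zero loss, regardless of how each individual frame is stylized.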

Citation (APA)

Li, W., Wen, L., Bian, X., & Lyu, S. (2019). Evolvement Constrained Adversarial Learning for Video Style Transfer. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11361 LNCS, pp. 232–248). Springer Verlag. https://doi.org/10.1007/978-3-030-20887-5_15
