Consistent video style transfer via compound regularization

Abstract

Recently, neural style transfer has attracted considerable attention and significant progress has been made, especially for image style transfer. However, flexible and consistent style transfer for videos remains a challenging problem. Existing training strategies, which either use a significant amount of video data with optical flows or introduce single-frame regularizers, have limited performance on real videos. In this paper, we propose a novel interpretation of temporal consistency, based on which we analyze the drawbacks of existing training strategies and then derive a new compound regularization. Experimental results show that the proposed regularization better balances spatial and temporal performance, which supports our modeling. Combined with a new cost formula, we design a zero-shot video style transfer framework. Moreover, for better feature migration, we introduce a new module that dynamically adjusts inter-channel distributions. Quantitative and qualitative results demonstrate the superiority of our method over other state-of-the-art style transfer methods. Our project is publicly available at: https://daooshee.github.io/CompoundVST/.
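For context, the sketch below shows the conventional optical-flow-based temporal consistency term that the abstract contrasts with (it is not the paper's compound regularization): the previous stylized frame is warped to the current frame with optical flow, and changes are penalized in non-occluded regions. The function names (warp, temporal_loss) and the flow/mask conventions are illustrative assumptions, not from the paper.

import torch
import torch.nn.functional as F

def warp(frame, flow):
    # Backward-warp `frame` (N, C, H, W) with optical `flow` (N, 2, H, W),
    # where flow[:, 0] is the horizontal and flow[:, 1] the vertical displacement
    # in pixels (an assumed convention).
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                             # (N, 2, H, W)
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                          # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    # Penalize differences between the current stylized frame and the
    # flow-warped previous stylized frame; occlusion_mask (N, 1, H, W) is 1
    # where the flow is valid and 0 in occluded regions.
    warped_prev = warp(stylized_prev, flow)
    return (occlusion_mask * (stylized_t - warped_prev) ** 2).mean()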

Cite

APA

Wang, W., Xu, J., Zhang, L., Wang, Y., & Liu, J. (2020). Consistent video style transfer via compound regularization. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (pp. 12233–12240). AAAI Press. https://doi.org/10.1609/aaai.v34i07.6905
