Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation


Abstract

Large text-to-image diffusion models have exhibited impressive proficiency in generating high-quality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zero-shot text-guided video-to-video translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical cross-frame constraints applied to enforce coherence in shapes, textures, and colors. The second part propagates the key frames to the remaining frames with temporal-aware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost (without re-training or optimization). The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering high-quality and temporally-coherent videos. Code is available at our project page: https://www.mmlab-ntu.com/project/rerender/
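
To make the two-stage pipeline described in the abstract concrete, here is a minimal sketch in Python. It is not the authors' implementation (which is linked from the project page); all names such as `translate_key_frame` and `propagate` are hypothetical placeholders standing in for the adapted diffusion model and the patch-matching/blending step.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All function names are hypothetical placeholders, not the authors' API.
from typing import Callable, List, Sequence


def rerender_video(
    frames: Sequence,               # input video frames
    prompt: str,                    # target text prompt
    translate_key_frame: Callable,  # adapted image diffusion model (may use ControlNet/LoRA)
    propagate: Callable,            # temporal-aware patch matching + frame blending
    key_interval: int = 10,
) -> List:
    # Stage 1: key frame translation with cross-frame constraints.
    # Each key frame is conditioned on previously translated key frames
    # so that shapes, textures, and colors stay coherent over time.
    key_indices = list(range(0, len(frames), key_interval))
    keys = {}
    previous = None
    for i in key_indices:
        keys[i] = translate_key_frame(frames[i], prompt, reference=previous)
        previous = keys[i]

    # Stage 2: full video translation.
    # Non-key frames are filled in by propagating the translated key
    # frames with temporal-aware patch matching and blending.
    output = list(frames)
    for a, b in zip(key_indices, key_indices[1:]):
        output[a:b + 1] = propagate(frames[a:b + 1], keys[a], keys[b])
    # Frames after the last key frame are left untouched in this
    # simplified sketch.
    output[key_indices[-1]] = keys[key_indices[-1]]
    return output
```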

Cite

Citation style: APA

Yang, S., Zhou, Y., Liu, Z., & Loy, C. C. (2023). Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation. In Proceedings - SIGGRAPH Asia 2023 Conference Papers, SA 2023. Association for Computing Machinery, Inc. https://doi.org/10.1145/3610548.3618160
