An Image-to-video Model for Real-Time Video Enhancement

Abstract

Recent years have witnessed the increasing popularity of learning-based methods for enhancing the color and tone of images. Although these methods achieve satisfying performance on static images, it is non-trivial to extend such image-to-image methods to videos: a straightforward extension easily leads to computational inefficiency or distracting flickering artifacts. In this paper, we propose a novel image-to-video model that enforces temporal stability for real-time video enhancement and is trained using only static images. Specifically, we first propose a lightweight image enhancer built on learnable flexible 2-dimensional lookup tables (F2D LUTs), which adaptively account for scene information. To impose temporal consistency, we further propose to infer motion fields via a virtual camera motion engine, which are used to stabilize the image-to-video model with a temporal consistency loss. Experimental results show that our image-to-video model not only achieves state-of-the-art performance on the image enhancement task but also performs favorably against baselines on the video enhancement task. Our source code is available at https://github.com/shedy-pub/I2VEnhance.
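
The F2D LUT enhancer itself is specified in the full paper; as a rough illustration of the general idea, the PyTorch sketch below implements a toy learnable 2D lookup table in which each channel's output depends on both the pixel's intensity and the image luminance, so the mapping can adapt to brightness context. The class name, table size, and the luminance-based second axis are illustrative assumptions, not the authors' actual F2D LUT design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Simple2DLUTEnhancer(nn.Module):
    """Toy learnable 2D lookup-table enhancer (illustrative sketch only).

    Each RGB channel is mapped through its own 2D table indexed by
    (pixel intensity, image luminance), so the transform can adapt to
    brightness context. This is NOT the paper's F2D LUT design.
    """

    def __init__(self, size: int = 33):
        super().__init__()
        # Identity initialization along the intensity axis, repeated
        # along the luminance axis: shape (3 channels, 1, size, size).
        ident = torch.linspace(0.0, 1.0, size)
        self.lut = nn.Parameter(ident.view(1, 1, 1, size).repeat(3, 1, size, 1))

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, 3, H, W), values in [0, 1].
        b = img.shape[0]
        luma = 0.299 * img[:, 0] + 0.587 * img[:, 1] + 0.114 * img[:, 2]
        out = []
        for c in range(3):
            # grid_sample expects coordinates in [-1, 1]; x indexes the
            # intensity axis of the table, y the luminance axis.
            grid = torch.stack([img[:, c] * 2 - 1, luma * 2 - 1], dim=-1)
            table = self.lut[c:c + 1].expand(b, -1, -1, -1)
            out.append(F.grid_sample(table, grid, align_corners=True))
        return torch.cat(out, dim=1)
```

Because the whole transform reduces to a bilinear table lookup per pixel, such LUT-based enhancers can run in real time even at high resolutions, which is the property the abstract's "lightweight" claim rests on.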
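The training idea behind the temporal consistency loss can be sketched in the same spirit: synthesize an adjacent frame from a static image by warping it with a known motion field, then penalize any disagreement between enhancing-then-warping and warping-then-enhancing. In the snippet below, a small random translation stands in for the paper's virtual camera motion engine; the function name and the simple affine motion model are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def virtual_motion_consistency_loss(enhancer, img):
    """Temporal consistency loss from static images (illustrative sketch).

    A random global translation stands in for a virtual camera motion
    engine: warp the image with a known motion field to synthesize a
    next frame, then require enhance-then-warp to match warp-then-enhance.
    """
    b = img.shape[0]
    # Identity affine matrix plus a small random translation.
    theta = torch.zeros(b, 2, 3, device=img.device)
    theta[:, 0, 0] = 1.0
    theta[:, 1, 1] = 1.0
    theta[:, :, 2] = (torch.rand(b, 2, device=img.device) - 0.5) * 0.1
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)

    def warp(x):
        return F.grid_sample(x, grid, align_corners=False)

    next_frame = warp(img)  # synthetic adjacent frame; motion is known
    out_cur = enhancer(img)
    out_next = enhancer(next_frame)
    # Flicker shows up as disagreement in the shared coordinate frame.
    return F.l1_loss(warp(out_cur), out_next)
```

Minimizing such a loss alongside the usual per-frame enhancement loss discourages flickering without requiring any real video data, which matches the trained-on-static-images setup the abstract describes.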

Citation (APA):

She, D., & Xu, K. (2022). An Image-to-video Model for Real-Time Video Enhancement. In MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia (pp. 1837–1846). Association for Computing Machinery. https://doi.org/10.1145/3503161.3548325
