Recursive Conditional Generative Adversarial Networks for Video Transformation

Abstract

Conditional generative adversarial networks (cGANs) are used in various transformation applications, such as super-resolution, colorization, image denoising, and image inpainting. So far, cGANs have been applied to the transformation of still images, but their use could be extended to the transformation of video content, which has a much larger market. This paper considers the problems with cGAN-based transformation of video content, the major one being flickering caused by discontinuity between adjacent frames. Several postprocessing algorithms have been proposed to reduce that effect after transformation. We propose a recursive cGAN in which the previous output frame is used as an input, in addition to the current input frame, to reduce the flickering effect without losing the objective quality of each image. Compared with previous postprocessing algorithms, our approach performed better in terms of various evaluation metrics for video content.
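To illustrate the recursive conditioning idea described in the abstract, the sketch below shows a generator that receives the current input frame together with the previously generated output frame (concatenated along the channel dimension), and a frame-by-frame inference loop that feeds each output back in as the next "previous" frame. This is a minimal illustration under assumptions, not the authors' implementation: the names RecursiveGenerator and transform_video, the tiny convolutional backbone, and the zero-initialized first previous frame are all hypothetical placeholders, and the paper's actual architecture, losses, and training procedure differ.

```python
import torch
import torch.nn as nn


class RecursiveGenerator(nn.Module):
    """Sketch of a generator conditioned on the current input frame and
    the previously generated output frame (channel-wise concatenation).
    The backbone here is a placeholder, not the paper's network."""

    def __init__(self, in_channels=3, out_channels=3, features=64):
        super().__init__()
        self.net = nn.Sequential(
            # Input has 2x channels: current frame + previous output frame.
            nn.Conv2d(in_channels * 2, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, out_channels, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, current_frame, previous_output):
        # Concatenate along the channel dimension of (B, C, H, W) tensors.
        x = torch.cat([current_frame, previous_output], dim=1)
        return self.net(x)


def transform_video(generator, frames):
    """Apply the generator frame by frame, feeding each output back in,
    so consecutive outputs stay temporally consistent."""
    outputs = []
    # Hypothetical choice: a blank frame stands in for the "previous
    # output" when processing the first frame of the sequence.
    previous_output = torch.zeros_like(frames[0])
    for frame in frames:
        previous_output = generator(frame, previous_output)
        outputs.append(previous_output)
    return outputs
```

In a full system the placeholder backbone would be replaced by a complete image-to-image generator (e.g., an encoder-decoder), with a discriminator and adversarial/reconstruction losses in the usual cGAN setup; the point of the sketch is only the recursive feedback of the previous output frame.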

Cite

APA

Kim, S., & Suh, D. Y. (2019). Recursive Conditional Generative Adversarial Networks for Video Transformation. IEEE Access, 7, 37807–37821. https://doi.org/10.1109/ACCESS.2019.2906472
