TiVGAN: Text to Image to Video Generation with Step-by-Step Evolutionary Generator

Abstract

Advances in technology have led to methods that can create desired visual multimedia. In particular, image generation using deep learning has been extensively studied across diverse fields. In comparison, video generation, especially conditional video generation, remains a challenging and less explored area. To narrow this gap, we aim to train a model that produces a video corresponding to a given text description. We propose a novel training framework, Text-to-Image-to-Video Generative Adversarial Network (TiVGAN), which evolves frame by frame and finally produces a full-length video. In the first phase, we focus on creating a high-quality single video frame while learning the relationship between the text and an image. As the steps proceed, the model is trained gradually on an increasing number of consecutive frames. This step-by-step learning process helps stabilize the training and enables the creation of high-resolution videos based on conditional text descriptions. Qualitative and quantitative experimental results on various datasets demonstrate the effectiveness of the proposed method.
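
The abstract describes training that begins with a single text-conditioned frame and then grows to longer clips. Below is a minimal PyTorch sketch of such a step-wise schedule; the class names, dimensions, placeholder loss, and frame-doubling schedule are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class TextToFramesGenerator(nn.Module):
    """Toy generator: maps a text embedding (plus noise) to T frames."""
    def __init__(self, text_dim=128, noise_dim=64, frame_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, frame_pixels),
            nn.Tanh(),
        )

    def forward(self, text_emb, noise, num_frames):
        # Generate each frame from the shared text embedding; a real model
        # would also condition on previous frames for temporal coherence.
        frames = [self.net(torch.cat([text_emb, noise], dim=1))
                  for _ in range(num_frames)]
        return torch.stack(frames, dim=1)  # (batch, T, frame_pixels)

def train_step_by_step(generator, text_emb, max_frames=16, steps_per_phase=100):
    """Phase 1 trains on a single text-conditioned frame; each later phase
    doubles the number of consecutive frames (an assumed schedule)."""
    opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    num_frames = 1
    while num_frames <= max_frames:
        for _ in range(steps_per_phase):
            noise = torch.randn(text_emb.size(0), 64)
            fake = generator(text_emb, noise, num_frames)
            # Placeholder loss; the actual framework uses an adversarial
            # objective with a discriminator over the generated clip.
            loss = fake.pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        num_frames *= 2  # evolve: twice as many frames in the next phase

Usage, with a random tensor standing in for the output of a text encoder:

gen = TextToFramesGenerator()
text_emb = torch.randn(4, 128)
train_step_by_step(gen, text_emb, max_frames=4, steps_per_phase=10)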

Cite (APA)

Kim, D., Joo, D., & Kim, J. (2020). TiVGAN: Text to Image to Video Generation with Step-by-Step Evolutionary Generator. IEEE Access, 8, 153113–153122. https://doi.org/10.1109/ACCESS.2020.3017881
