A Temporally-Aware Interpolation Network for Video Frame Inpainting

Abstract

We propose the first deep learning solution to video frame inpainting, a task that is more challenging but less ambiguous than related problems such as general video inpainting, frame interpolation, and video prediction. We devise a pipeline composed of two modules: a bidirectional video prediction module and a temporally-aware frame interpolation module. The prediction module makes two intermediate predictions of the missing frames, one conditioned on the preceding frames and the other on the following frames, using a shared convolutional LSTM-based encoder-decoder. The interpolation module blends the intermediate predictions, using time information and hidden activations from the video prediction module to resolve disagreements between the predictions. Our experiments demonstrate that our approach produces more accurate and qualitatively satisfying results than a state-of-the-art video prediction method and many strong frame inpainting baselines. Our code is available at https://github.com/sunxm2357/TAI_video_frame_inpainting.
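
The two-module pipeline can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' implementation: the single-convolution stand-in for the ConvLSTM encoder-decoder, the module names, and the purely time-based blend weight are all simplifications (in the paper, the blend is additionally conditioned on the predictor's hidden activations). See the linked repository for the actual code.

```python
# Minimal sketch of the two-module pipeline described in the abstract.
# All names, shapes, and the blending scheme are illustrative assumptions.
import torch
import torch.nn as nn


class BidirectionalPredictor(nn.Module):
    """Toy stand-in for the shared ConvLSTM-based encoder-decoder.

    Run once on the preceding frames and once on the time-reversed
    following frames to get two intermediate predictions of a missing frame.
    """

    def __init__(self, channels=3, hidden=16):
        super().__init__()
        self.encode = nn.Conv2d(channels, hidden, 3, padding=1)
        self.decode = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, context):              # context: (B, T, C, H, W)
        feats = self.encode(context[:, -1])  # encode the last context frame
        return self.decode(feats), feats     # prediction + hidden activations


class TemporallyAwareBlend(nn.Module):
    """Blends the forward and backward predictions.

    Here the blend weight comes only from the normalized time step t; the
    paper also uses the predictor's hidden activations to resolve
    disagreements between the two predictions.
    """

    def forward(self, pred_fwd, pred_bwd, t):
        w = t.view(-1, 1, 1, 1)  # later missing frames trust the backward pass more
        return (1 - w) * pred_fwd + w * pred_bwd


if __name__ == "__main__":
    B, T, C, H, W = 2, 4, 3, 32, 32
    preceding = torch.rand(B, T, C, H, W)
    following = torch.rand(B, T, C, H, W)

    predictor = BidirectionalPredictor(C)
    blend = TemporallyAwareBlend()

    p_fwd, _ = predictor(preceding)                        # forward-in-time prediction
    p_bwd, _ = predictor(torch.flip(following, dims=[1]))  # backward-in-time prediction

    t = torch.full((B,), 0.5)  # normalized temporal position of the missing frame
    middle = blend(p_fwd, p_bwd, t)
    print(middle.shape)  # torch.Size([2, 3, 32, 32])
```

Using a shared predictor for both directions, as sketched above, mirrors the abstract's description of a single encoder-decoder conditioned separately on the preceding and following frames.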

Cite

APA

Sun, X., Szeto, R., & Corso, J. J. (2019). A Temporally-Aware Interpolation Network for Video Frame Inpainting. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11363 LNCS, pp. 249–264). Springer Verlag. https://doi.org/10.1007/978-3-030-20893-6_16
