Bidirectional temporal-recurrent propagation networks for video super-resolution

Abstract

Recently, convolutional neural networks have achieved remarkable performance in video super-resolution. However, exploiting the spatial and temporal information of video efficiently and effectively remains challenging. In this work, we design a bidirectional temporal-recurrent propagation unit, which lets temporal information flow from frame to frame in an RNN-like manner and avoids complex motion estimation and motion compensation. To better fuse the features of the forward and backward temporal-recurrent propagation units, we use a channel attention mechanism. Additionally, we adopt a progressive up-sampling scheme instead of one-step up-sampling and find that it yields better experimental results. Extensive experiments show that our algorithm outperforms several recent state-of-the-art video super-resolution (VSR) methods with a smaller model size.
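The abstract names three components: bidirectional (forward and backward) recurrent propagation, channel-attention fusion of the two branches, and progressive up-sampling. The sketch below is a minimal illustration of how such a pipeline can be wired together, assuming a PyTorch implementation and a x4 scale factor; the module names, layer widths, and cell design are illustrative assumptions, not the architecture released by the authors.

# Minimal sketch (not the authors' code) of bidirectional recurrent propagation,
# channel-attention fusion, and progressive (x2 then x2) up-sampling for VSR.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate used to weight the fused features."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class BidirectionalVSR(nn.Module):
    """Hypothetical bidirectional recurrent VSR model for a x4 scale factor."""

    def __init__(self, channels=64):
        super().__init__()
        # Recurrent propagation cells: current LR frame (3 ch) + hidden state.
        self.forward_cell = nn.Conv2d(3 + channels, channels, 3, padding=1)
        self.backward_cell = nn.Conv2d(3 + channels, channels, 3, padding=1)
        # Channel attention fuses the forward and backward branches.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.attention = ChannelAttention(channels)
        # Progressive up-sampling: two x2 pixel-shuffle stages instead of one x4 step.
        self.up1 = nn.Sequential(nn.Conv2d(channels, 4 * channels, 3, padding=1),
                                 nn.PixelShuffle(2), nn.ReLU(inplace=True))
        self.up2 = nn.Sequential(nn.Conv2d(channels, 4 * channels, 3, padding=1),
                                 nn.PixelShuffle(2), nn.ReLU(inplace=True))
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) low-resolution clip.
        b, t, _, h, w = frames.shape
        hidden = frames.new_zeros(b, self.forward_cell.out_channels, h, w)
        fwd = []
        for i in range(t):  # forward pass in time
            hidden = self.relu(self.forward_cell(torch.cat([frames[:, i], hidden], dim=1)))
            fwd.append(hidden)
        hidden = frames.new_zeros(b, self.backward_cell.out_channels, h, w)
        bwd = [None] * t
        for i in reversed(range(t)):  # backward pass in time
            hidden = self.relu(self.backward_cell(torch.cat([frames[:, i], hidden], dim=1)))
            bwd[i] = hidden
        outputs = []
        for i in range(t):
            feat = self.attention(self.fuse(torch.cat([fwd[i], bwd[i]], dim=1)))
            outputs.append(self.to_rgb(self.up2(self.up1(feat))))
        return torch.stack(outputs, dim=1)  # (batch, time, 3, 4H, 4W)


if __name__ == "__main__":
    model = BidirectionalVSR()
    clip = torch.rand(1, 5, 3, 32, 32)   # 5 low-resolution frames
    print(model(clip).shape)             # torch.Size([1, 5, 3, 128, 128])

Because both temporal passes run over the whole clip before fusion, each output frame can draw on past and future frames without an explicit optical-flow or motion-compensation module, which is the property the abstract emphasizes.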

Cite (APA)
Han, L., Fan, C., Yang, Y., & Zou, L. (2020). Bidirectional temporal-recurrent propagation networks for video super-resolution. Electronics (Switzerland), 9(12), 1–15. https://doi.org/10.3390/electronics9122085
