Video Super Resolution via Deep Global-Aware Network

Abstract

Video super-resolution aims to increase the resolution of videos by exploiting the intra-frame and inter-frame dependencies of low-resolution video sequences. Video super-resolution typically involves two dependent steps: motion compensation and super-resolution reconstruction. In this paper, we propose a new deep learning framework that avoids explicit motion estimation by using a self-attention model to exploit the full receptive field of the input video frames. In other words, the proposed deep neural network extracts local features at all spatio-temporal locations and combines them into global features using self-attention networks in order to reconstruct the high-resolution video frame. The proposed global-aware network outperforms state-of-the-art deep learning-based image and video super-resolution algorithms in terms of subjective and objective quality with fewer computational operations, as verified by extensive experiments on public image and video datasets, including Set5, Set14, B100, Urban100, and Vid4.
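The core idea in the abstract — every spatio-temporal location attending to all locations across the input frames, in place of explicit motion compensation — can be sketched with a minimal self-attention computation. This is an illustrative sketch only, not the paper's actual architecture: the projection matrices below are random placeholders standing in for learned weights, and the feature dimensions are made up.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(frames, d=16, seed=0):
    """Global self-attention over all spatio-temporal locations.

    frames: (T, H, W, C) array of local features from T low-res frames.
    Returns (T, H, W, d) global features and the (N, N) attention map,
    where N = T*H*W, so each location sees the full receptive field.
    """
    T, H, W, C = frames.shape
    x = frames.reshape(T * H * W, C)  # flatten every spatio-temporal location
    rng = np.random.default_rng(seed)
    # Hypothetical query/key/value projections (learned in a real network).
    Wq, Wk, Wv = (rng.standard_normal((C, d)) / np.sqrt(C) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # (N, N): every location vs. every location
    out = attn @ v                        # global features per location
    return out.reshape(T, H, W, d), attn

# Toy input: 3 frames of 4x4 feature maps with 8 channels.
feats, attn = global_attention(np.random.default_rng(1).standard_normal((3, 4, 4, 8)))
print(feats.shape)  # (3, 4, 4, 16)
```

A real model would follow this with upsampling layers to reconstruct the high-resolution frame; the point here is only that the attention map spans all frames at once, so inter-frame dependencies are captured without a separate motion-estimation step.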

Citation (APA)

Hung, K. W., Qiu, C., & Jiang, J. (2019). Video Super Resolution via Deep Global-Aware Network. IEEE Access, 7, 74711–74720. https://doi.org/10.1109/ACCESS.2019.2920774
