Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring

Abstract

Real-time video deblurring remains a challenging task due to the complexity of spatially and temporally varying blur and the requirement of low computational cost. To improve network efficiency, we adopt residual dense blocks into RNN cells to efficiently extract the spatial features of the current frame. Furthermore, a global spatio-temporal attention module is proposed to fuse effective hierarchical features from past and future frames to help better deblur the current frame. For evaluation, we also collect a novel dataset of paired blurry/sharp video clips using a co-axis beam splitter system. Through experiments on synthetic and realistic datasets, we show that our proposed method achieves better deblurring performance, both quantitatively and qualitatively, at lower computational cost than state-of-the-art video deblurring methods.
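The abstract's pipeline can be summarized as a minimal sketch, not the authors' implementation: a forward recurrent pass extracts per-frame features (standing in for the RDB-based RNN cells), and each output frame is then reconstructed by fusing features from neighboring past/future frames (standing in for the global spatio-temporal attention module). All function names, shapes, and the simple averaging fusion below are illustrative assumptions.

```python
import numpy as np

def extract_features(frame, hidden):
    # Stand-in for the RDB-based RNN cell: mixes the current frame
    # with the hidden state carried from the previous time step.
    return 0.5 * (frame + hidden)

def fuse_neighbors(feats, t, radius=1):
    # Stand-in for global spatio-temporal attention: here simply an
    # average over hierarchical features of frames near frame t
    # (the real module learns attention weights instead).
    lo, hi = max(0, t - radius), min(len(feats), t + radius + 1)
    return np.mean(feats[lo:hi], axis=0)

def deblur_sequence(frames):
    hidden = np.zeros_like(frames[0])
    feats = []
    for frame in frames:                     # forward recurrent pass
        hidden = extract_features(frame, hidden)
        feats.append(hidden)
    feats = np.stack(feats)
    # Reconstruct each frame from fused past/current/future features.
    return [fuse_neighbors(feats, t) for t in range(len(frames))]

# Toy sequence of three 4x4 "frames".
frames = [np.full((4, 4), float(i)) for i in range(3)]
out = deblur_sequence(frames)
print(len(out), out[0].shape)  # 3 (4, 4)
```

The recurrent hidden state is what keeps the per-frame cost low: temporal context accumulates frame by frame instead of reprocessing a whole window of frames for every output, while the fusion step still lets future frames inform the current one.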

Citation (APA)

Zhong, Z., Gao, Y., Zheng, Y., & Zheng, B. (2020). Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12351 LNCS, pp. 191–207). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58539-6_12
