Video enhancement via super-resolution using deep quality transfer network

Abstract

Streaming at a low bitrate while preserving high-quality video content is a crucial topic in multimedia and video surveillance. In this work, we explore the problem of spatially and temporally reconstructing high-resolution (HR) frames from a high frame-rate low-resolution (LR) sequence and a few temporally subsampled HR frames. The targeted problem is essentially different from the problems handled by typical super-resolution (SR) methods, such as single-image SR and video SR, which attempt to reconstruct HR images using only LR images. To tackle the targeted problem, we propose a deep quality transfer network, based on the convolutional neural network (CNN), which consists of modules for generation and selection of HR pixel candidates, fusion with the LR input, residual learning, and a bidirectional architecture. The proposed CNN model achieves real-time performance at the inference stage. Empirical studies verify the generality of the proposed CNN model, showing significant quality gains for video enhancement.
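To make the idea of quality transfer concrete, the sketch below shows a minimal PyTorch model that fuses a bicubically upsampled LR frame with an HR pixel candidate propagated from a nearby HR keyframe and predicts a residual correction. The module names, channel widths, and fusion strategy are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal, hypothetical sketch of a "quality transfer" CNN in PyTorch.
# Channel widths and the fusion strategy are assumptions for illustration;
# the paper's exact architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QualityTransferSketch(nn.Module):
    """Fuses an upsampled LR frame with an HR pixel candidate propagated
    from a nearby HR keyframe, and learns a residual correction."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Fusion: concatenate the upsampled LR frame and the HR candidate (3 + 3 channels).
        self.fuse = nn.Conv2d(6, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, lr_frame: torch.Tensor, hr_candidate: torch.Tensor) -> torch.Tensor:
        # Upsample the LR frame to HR resolution.
        up = F.interpolate(lr_frame, size=hr_candidate.shape[-2:],
                           mode="bicubic", align_corners=False)
        # Residual learning: predict a correction on top of the upsampled
        # LR frame instead of regressing the full HR frame directly.
        residual = self.body(self.fuse(torch.cat([up, hr_candidate], dim=1)))
        return up + residual


if __name__ == "__main__":
    model = QualityTransferSketch()
    lr = torch.randn(1, 3, 90, 160)        # low-resolution input frame
    hr_cand = torch.randn(1, 3, 360, 640)  # HR candidate from a keyframe
    print(model(lr, hr_cand).shape)        # torch.Size([1, 3, 360, 640])
```

A bidirectional variant, as described in the abstract, would run such a fusion both forward and backward in time from the surrounding HR keyframes and combine the two predictions.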

Citation (APA)

Hsiao, P. H., & Chang, P. L. (2017). Video enhancement via super-resolution using deep quality transfer network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10113 LNCS, pp. 184–200). Springer Verlag. https://doi.org/10.1007/978-3-319-54187-7_13
