Streaming at a low bitrate while preserving high-quality video content is a crucial topic in multimedia and video surveillance. In this work, we explore the problem of spatially and temporally reconstructing high-resolution (HR) frames from a high-frame-rate low-resolution (LR) sequence and a few temporally subsampled HR frames. This problem is essentially different from those handled by typical super-resolution (SR) methods, such as single-image SR and video SR, which attempt to reconstruct HR images from LR images alone. To tackle it, we propose a deep quality transfer network based on the convolutional neural network (CNN), which consists of modules for generating and selecting HR pixel candidates, fusing them with the LR input, residual learning, and a bidirectional architecture. The proposed CNN model achieves real-time performance at the inference stage. Empirical studies have verified the generality of the proposed model, showing significant quality gains for video enhancement.
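The input setting described above (a dense LR sequence plus sparse HR keyframes) can be illustrated with a toy sketch. This is not the paper's CNN; the function names, the block-averaging downsampler, and the nearest-neighbor baseline are all illustrative assumptions, chosen only to make the data configuration concrete.

```python
import numpy as np

def make_inputs(video_hr, keyframe_stride=4, scale=4):
    """Simulate the input setting: keep every `keyframe_stride`-th HR frame
    and downsample all frames by `scale` (block averaging stands in for a
    real downsampler)."""
    T, H, W = video_hr.shape
    lr = video_hr.reshape(T, H // scale, scale, W // scale, scale).mean(axis=(2, 4))
    hr_keyframes = {t: video_hr[t] for t in range(0, T, keyframe_stride)}
    return lr, hr_keyframes

def naive_quality_transfer(lr, hr_keyframes, scale=4):
    """Toy baseline (not the proposed network): upsample each LR frame by
    nearest-neighbor replication and substitute the exact HR keyframes
    where they are available."""
    out = np.repeat(np.repeat(lr, scale, axis=1), scale, axis=2)
    for t, frame in hr_keyframes.items():
        out[t] = frame
    return out
```

A learned quality transfer network would replace the nearest-neighbor step with predicted HR pixel candidates fused with the LR input, propagating detail bidirectionally from the keyframes to the in-between frames.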
Hsiao, P. H., & Chang, P. L. (2017). Video enhancement via super-resolution using deep quality transfer network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10113 LNCS, pp. 184–200). Springer Verlag. https://doi.org/10.1007/978-3-319-54187-7_13