Image Super-Resolution Using Knowledge Distillation

41 citations · 18 Mendeley readers

Abstract

The significant improvements in image super-resolution (SR) in recent years have largely resulted from the use of ever-deeper convolutional neural networks (CNNs). However, both computational time and memory consumption grow with the depth of a CNN model, making it challenging to deploy SR models in real time on computationally limited devices. In this work, we propose a novel strategy that uses a teacher-student network to improve image SR performance: the training of a small but efficient student network is guided by a deep and powerful teacher network. We evaluate performance under several knowledge distillation schemes. Validated on four datasets, the proposed method significantly improves the SR performance of a student network without changing its structure. This means that computational time and memory consumption do not increase during the testing stage, while the SR performance is significantly improved.
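
The abstract does not specify the exact distillation objective. As a rough illustration of output-level teacher-student training for SR, the PyTorch sketch below combines a ground-truth reconstruction loss with a loss against the teacher's super-resolved outputs; the L1 criterion, the weighting `alpha`, and the `teacher`/`student` modules are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn

def distillation_step(student: nn.Module, teacher: nn.Module,
                      lr_batch: torch.Tensor, hr_batch: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    """One training step: the student is supervised by both the
    ground-truth HR images and the frozen teacher's SR outputs."""
    criterion = nn.L1Loss()
    teacher.eval()
    with torch.no_grad():                        # teacher provides fixed targets
        teacher_sr = teacher(lr_batch)
    student_sr = student(lr_batch)
    loss_gt = criterion(student_sr, hr_batch)    # standard reconstruction loss
    loss_kd = criterion(student_sr, teacher_sr)  # distillation term
    return (1 - alpha) * loss_gt + alpha * loss_kd
```

Only the student network is run at test time, which is why inference time and memory consumption remain unchanged while performance improves.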

Citation (APA)

Gao, Q., Zhao, Y., Li, G., & Tong, T. (2019). Image Super-Resolution Using Knowledge Distillation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11362 LNCS, pp. 527–541). Springer Verlag. https://doi.org/10.1007/978-3-030-20890-5_34
