Towards Compact Single Image Super-Resolution via Contrastive Self-distillation


Abstract

Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. Then, we propose a novel contrastive loss that improves the quality of SR images, measured by PSNR/SSIM, via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN. Code is available at https://github.com/Booooooooooo/CSD.
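The contrastive loss described above pulls the student's output toward a positive target (the teacher/high-resolution image) while pushing it away from negative samples (e.g., blurry upsampled inputs). As a rough illustration only, here is a minimal pure-Python sketch of such a ratio-form contrastive loss; the actual CSD method computes distances in a pretrained feature space and over multiple layers, which is omitted here, and all names are illustrative:

```python
def l1_distance(a, b):
    # Mean absolute difference between two flattened images (toy stand-in
    # for a feature-space distance such as VGG-feature L1).
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def contrastive_loss(anchor, positive, negatives, eps=1e-8):
    # Ratio form: distance to the positive (teacher/HR output) divided by
    # the summed distance to the negatives (e.g., upsampled LR images).
    # Minimizing this pulls the anchor toward the positive and pushes it
    # away from the negatives simultaneously.
    pos = l1_distance(anchor, positive)
    neg = sum(l1_distance(anchor, n) for n in negatives)
    return pos / (neg + eps)
```

With this formulation, a student output that is close to the high-resolution target and far from the blurry negatives yields a small loss, which is the behavior the knowledge-transfer objective rewards.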

Citation (APA)

Wang, Y., Lin, S., Qu, Y., Wu, H., Zhang, Z., Xie, Y., & Yao, A. (2021). Towards Compact Single Image Super-Resolution via Contrastive Self-distillation. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1122–1128). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/155
