Area-Specific Convolutional Neural Networks for Single Image Super-Resolution

Abstract

The implementation of deep convolutional neural networks (CNNs) in single image super-resolution (SISR) has been successful at improving restoration quality. However, analysis in previous works shows that the details missing from low-resolution (LR) images lie mostly in high-frequency regions. Because a conventional CNN processes all regions of the LR image equally, computational redundancy arises in the low-frequency areas. We generate a gradient-based binary mask (decision mask) to discriminate the high-frequency areas from the low-frequency areas and apply two kinds of convolution to them separately. We propose an Area-Specific CNN (ASCNN) for super-resolution, consisting of high-parameter convolutions and low-parameter convolutions that process the high-frequency and low-frequency areas separately, which reduces FLOPs (floating-point operations) while maintaining restoration quality. The reduction settings are configurable, and experimental results show that ASCNN achieves state-of-the-art performance with FLOP reductions of up to 40.1%/37.0%/34.0% for the ×2/×3/×4 scale factors.
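
As a rough illustration of the idea described in the abstract, the PyTorch sketch below builds a gradient-based decision mask and routes features through either a high-parameter or a low-parameter convolution according to that mask. The Sobel gradient estimate, the threshold value, the branch widths, and the recombination by masked addition are assumptions for illustration, not the paper's actual ASCNN design; the sketch also evaluates both branches densely for simplicity, whereas the paper's FLOP savings come from restricting computation to the selected areas.

```python
# Minimal sketch of an "area-specific" convolution block, assuming a Sobel-based
# gradient mask and a simple masked-addition merge (details not given in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_decision_mask(lr_image, threshold=0.05):
    """Binary mask marking high-frequency (high-gradient) pixels of an LR image.

    lr_image: (N, C, H, W) tensor in [0, 1]. Returns a (N, 1, H, W) float mask.
    """
    gray = lr_image.mean(dim=1, keepdim=True)          # crude luminance proxy
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    gx = F.conv2d(gray, sobel_x.to(gray), padding=1)
    gy = F.conv2d(gray, sobel_y.to(gray), padding=1)
    grad_mag = torch.sqrt(gx ** 2 + gy ** 2)
    return (grad_mag > threshold).float()               # 1 = high-frequency area


class AreaSpecificBlock(nn.Module):
    """Heavy convolution on masked (high-frequency) regions, cheap convolution
    elsewhere, recombined by masked addition."""

    def __init__(self, channels=64, low_channels=16):
        super().__init__()
        # High-parameter branch: full-width 3x3 convolution for detailed regions.
        self.high_conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Low-parameter branch: bottlenecked convolution for smooth regions.
        self.low_conv = nn.Sequential(
            nn.Conv2d(channels, low_channels, 1),
            nn.Conv2d(low_channels, channels, 3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, feat, mask):
        # mask (N, 1, H, W) broadcasts over the channel dimension of feat.
        high = self.high_conv(feat) * mask
        low = self.low_conv(feat) * (1.0 - mask)
        return self.act(high + low)


if __name__ == "__main__":
    lr = torch.rand(1, 3, 48, 48)
    mask = gradient_decision_mask(lr)           # computed once from the LR input
    feat = nn.Conv2d(3, 64, 3, padding=1)(lr)   # shallow feature extraction
    out = AreaSpecificBlock()(feat, mask)
    print(out.shape)                            # torch.Size([1, 64, 48, 48])
```

In this sketch the decision mask is computed once from the LR input and reused in every block, so the routing decision itself adds negligible cost; how the two branches' outputs are actually merged and how the mask interacts with upsampling are left to the paper.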

Citation (APA)
Alao, H., Kim, T. S., & Lee, K. (2022). Area-Specific Convolutional Neural Networks for Single Image Super-Resolution. IEEE Access, 10, 104567–104576. https://doi.org/10.1109/ACCESS.2022.3210694
