Attention mechanisms in CNN-based single image super-resolution: A brief review and a new perspective

Citations: 60 · Mendeley readers: 46

Abstract

With the advance of deep learning, the performance of single image super-resolution (SR) has been notably improved by convolutional neural network (CNN)-based methods. However, the increasing depth of CNNs makes them more difficult to train, which hinders SR networks from achieving greater success. To overcome this, a wide range of attention mechanisms has recently been introduced into SR networks, with the aim of helping them converge more quickly and perform better. This has resulted in many research papers that incorporate a variety of attention mechanisms into SR baselines from different perspectives. This survey therefore focuses on this topic and reviews these recently published works by grouping them into three major categories: channel attention, spatial attention, and non-local attention. For each group in the taxonomy, the basic concepts are first explained, and then we delve into the detailed insights and contributions. Finally, we conclude this review by highlighting the bottlenecks of current SR attention mechanisms and propose a new perspective that can be viewed as a potential way to make a breakthrough.
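To make the first category in the taxonomy concrete, the following is a minimal NumPy sketch of channel attention in the squeeze-and-excitation style commonly used in SR networks: spatially pool each feature channel, pass the pooled vector through a small bottleneck, and use a sigmoid gate to reweight the channels. The function name, the reduction ratio, and the random weights are illustrative assumptions, not taken from any specific paper in the survey.

```python
import numpy as np

def channel_attention(x, reduction=4, seed=0):
    """Illustrative SE-style channel attention on a (C, H, W) feature map.

    Weights are random placeholders; in a real SR network they are learned.
    """
    c, h, w = x.shape
    rng = np.random.default_rng(seed)
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck of two fully connected layers, FC -> ReLU -> FC
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = w2 @ np.maximum(w1 @ z, 0.0)
    # Sigmoid gate: one scaling factor in (0, 1) per channel
    s = 1.0 / (1.0 + np.exp(-s))
    # Scale: reweight each channel of the input feature map
    return x * s[:, None, None]

feat = np.ones((8, 4, 4))          # toy feature map: 8 channels, 4x4 spatial
out = channel_attention(feat)
print(out.shape)                   # (8, 4, 4): shape is preserved
```

Spatial attention follows the same gate-then-rescale pattern but produces one weight per spatial location instead of per channel, while non-local attention computes pairwise similarities between all positions.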

Citation (APA)

Zhu, H., Xie, C., Fei, Y., & Tao, H. (2021). Attention mechanisms in CNN-based single image super-resolution: A brief review and a new perspective. Electronics (Switzerland). MDPI AG. https://doi.org/10.3390/electronics10101187
