Weight Excitation: Built-in Attention Mechanisms in Convolutional Neural Networks

Abstract

We propose novel approaches for simultaneously identifying the important weights of a convolutional neural network (ConvNet) and providing more attention to those important weights during training. More formally, we identify two characteristics of a weight, its magnitude and its location, that can be linked to its importance. By targeting these characteristics during training, we develop two separate weight excitation (WE) mechanisms via weight-reparameterization-based backpropagation modifications. We demonstrate significant improvements over popular baseline ConvNets on multiple computer vision applications using WE (e.g., a 1.3% accuracy improvement over the ResNet50 baseline on ImageNet image classification). These improvements come at no extra computational cost or ConvNet structural change during inference. Additionally, including WE methods in a convolution block is straightforward, requiring only a few lines of extra code. Lastly, WE mechanisms can provide complementary benefits when used with external attention mechanisms such as the popular Squeeze-and-Excitation attention block.
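
Since the abstract stresses that WE fits into a convolution block with only a few lines of extra code, the following is a minimal PyTorch sketch of the general idea behind magnitude-based weight excitation. It is illustrative, not the authors' exact formulation: the class name MWEConv2d, the sigmoid gate, and the alpha hyperparameter are assumptions made for this example. The effective weight during training is w scaled by a gate that grows with |w|, so backpropagation gives proportionally more attention to larger-magnitude weights.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MWEConv2d(nn.Conv2d):
        """Conv2d with a magnitude-based weight-excitation sketch.

        Illustrative only: the effective training weight is
        w * sigmoid(alpha * |w| / max|w|), so weights with larger
        magnitude receive larger gradients during backpropagation.
        """

        def __init__(self, *args, alpha=1.0, **kwargs):
            super().__init__(*args, **kwargs)
            self.alpha = alpha  # hypothetical gate-sharpness hyperparameter

        def forward(self, x):
            w = self.weight
            # Normalize by the (detached) max magnitude so the gate input
            # stays in [0, alpha] regardless of the overall weight scale.
            gate = torch.sigmoid(self.alpha * w.abs() /
                                 (w.abs().max().detach() + 1e-8))
            return F.conv2d(x, w * gate, self.bias, self.stride,
                            self.padding, self.dilation, self.groups)

    # Usage: a drop-in replacement for nn.Conv2d during training.
    conv = MWEConv2d(64, 128, kernel_size=3, padding=1)
    y = conv(torch.randn(2, 64, 32, 32))  # -> shape (2, 128, 32, 32)

After training, folding the gate into the weights (replacing w with w * gate) recovers a standard convolution, which is consistent with the abstract's claim of no extra computational cost or structural change at inference.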

Cite

APA

Quader, N., Bhuiyan, M. M. I., Lu, J., Dai, P., & Li, W. (2020). Weight Excitation: Built-in Attention Mechanisms in Convolutional Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12375 LNCS, pp. 87–103). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58577-8_6
