Feature recalibration in deep learning via depthwise squeeze and refinement operations


Abstract

Feature recalibration is an effective strategy for further improving the performance of deep networks. The commonly used global pooling operation loses discriminative feature information, so additional fully connected layers are required to model the relationships between feature maps. In this paper, we propose a novel architectural unit that recalibrates feature maps based on the discriminative information contained in each feature map. To achieve this, we use a two-layer depthwise convolution instead of global pooling to extract the distinguishing information of each feature map. Since the convolution can discriminatively learn the global information of each feature map, we discard the fully connected layers so that each feature map is adjusted independently. After obtaining the global information of each feature map, a nonlinear function is applied directly to refine the feature maps, enhancing the discriminative ones and suppressing useless ones. Based on its design characteristics, we name the new unit the 'Depthwise Squeeze and Refinement' (DSR) block. It can be easily embedded into existing state-of-the-art deep architectures to significantly improve performance at a slight computational cost.
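The squeeze-and-refinement idea described above can be sketched in PyTorch. This is a minimal illustration, not the paper's exact implementation: the class name `DSRBlock`, the choice of kernel sizes, the intermediate spatial size, and the use of ReLU and sigmoid as the nonlinearities are all assumptions made for the sketch. The key properties from the abstract are preserved, though: two depthwise convolution layers (rather than global pooling) reduce each feature map to a single scalar, and no fully connected layer mixes channels, so each feature map is gated independently.

```python
import torch
import torch.nn as nn


class DSRBlock(nn.Module):
    """Sketch of a 'Depthwise Squeeze and Refinement' (DSR) style block.

    Hypothetical implementation: two depthwise convolutions ("squeeze")
    collapse each feature map to one scalar, and a sigmoid of that scalar
    rescales the corresponding input map ("refinement"). Kernel sizes and
    the intermediate spatial size are assumptions of this sketch.
    """

    def __init__(self, channels: int, in_size: int):
        super().__init__()
        mid = in_size // 2  # assumed intermediate spatial size
        # Depthwise conv 1: (in_size x in_size) -> (mid x mid) per channel;
        # groups=channels keeps every channel independent (no FC mixing).
        self.squeeze1 = nn.Conv2d(channels, channels,
                                  kernel_size=in_size - mid + 1,
                                  groups=channels)
        # Depthwise conv 2: (mid x mid) -> (1 x 1), one scalar per channel.
        self.squeeze2 = nn.Conv2d(channels, channels,
                                  kernel_size=mid,
                                  groups=channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: learn a global descriptor of each feature map.
        s = self.act(self.squeeze1(x))
        s = self.squeeze2(s)               # shape: (N, C, 1, 1)
        # Refinement: gate each feature map by its own descriptor,
        # enhancing discriminative maps and suppressing useless ones.
        return x * torch.sigmoid(s)
```

Like an SE block, this unit is residual-friendly: it preserves the input shape, so it can be dropped after any convolutional stage of an existing network.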

Citation (APA)

Zhang, X., & Zhang, X. (2020). Feature recalibration in deep learning via depthwise squeeze and refinement operations. IEEE Access, 8, 79046–79055. https://doi.org/10.1109/ACCESS.2020.2990658
