Skip connections in deep networks have improved both segmentation and classification performance by facilitating the training of deeper architectures and reducing the risk of vanishing gradients. Skip connections equip encoder-decoder networks with richer feature representations, but at the cost of higher memory usage, more computation, and the possible transfer of non-discriminative feature maps. In this paper, we focus on improving the skip connections used in segmentation networks. We propose light, learnable skip connections which learn to first select the most discriminative channels and then aggregate the selected channels into a single channel that attends to the most discriminative regions of the input. We evaluate the proposed method on three different 2D and volumetric datasets and demonstrate that the proposed skip connections can outperform the traditional heavy skip connections of four different models in terms of segmentation accuracy (2% Dice), memory usage (at least 50%), and the number of network parameters (up to 70%).
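The select-then-aggregate idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the per-channel gate scores, and the fixed top-k selection are assumptions for illustration (in the paper the selection and attention are learned end-to-end within the network).

```python
import numpy as np

def light_skip_connection(feature_maps, gate_logits, k=2):
    """Sketch of a select-then-aggregate skip connection (hypothetical API).

    feature_maps: (C, H, W) encoder activations to be transferred.
    gate_logits:  (C,) per-channel scores; assumed given here, but
                  learned jointly with the network in the paper.
    k:            number of channels to keep.
    """
    # 1. Select: keep the k channels with the highest learned scores.
    top = np.argsort(gate_logits)[-k:]
    selected = feature_maps[top]                      # (k, H, W)

    # Softmax over the selected channels' scores.
    weights = np.exp(gate_logits[top] - gate_logits[top].max())
    weights /= weights.sum()

    # 2. Aggregate: weighted sum collapses k channels into a single
    #    attention-like channel, reducing what is transferred to the decoder.
    return np.tensordot(weights, selected, axes=1)    # (H, W)
```

Transferring one aggregated channel instead of all C channels is what yields the memory and parameter savings the abstract reports, since the decoder no longer concatenates the full encoder feature stack.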
CITATION STYLE
Taghanaki, S. A., Bentaieb, A., Sharma, A., Zhou, S. K., Zheng, Y., Georgescu, B., … Hamarneh, G. (2019). Select, Attend, and Transfer: Light, Learnable Skip Connections. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11861 LNCS, pp. 417–425). Springer. https://doi.org/10.1007/978-3-030-32692-0_48