SelectScale: Mining more patterns from images via selective and soft dropout

Abstract

Convolutional neural networks (CNNs) have achieved remarkable success in image recognition. Although CNNs effectively learn internal patterns of the input images, the learned patterns constitute only a small proportion of the useful patterns those images contain. This is because a CNN stops learning new patterns once the ones it has already learned are sufficient for correct classification. Network regularization methods such as dropout and SpatialDropout ease this problem by randomly dropping features during training. In essence, these dropout methods alter the patterns the network has learned and thereby force it to learn other patterns in order to classify correctly. However, they share an important drawback: randomly dropping features is generally inefficient and can introduce unnecessary noise. To tackle this problem, we propose SelectScale. Instead of randomly dropping units, SelectScale selects the important features in the network and adjusts them during training. Using SelectScale, we improve the performance of CNNs on CIFAR and ImageNet.
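The abstract describes the mechanism only at a high level. As an illustration of the general idea of "selective and soft" dropout, not the authors' implementation, the PyTorch sketch below contrasts it with random channel dropping: the channels with the largest mean activation are treated as important and softly scaled down rather than zeroed at random. The class name SelectiveSoftDropout and the hyperparameters select_ratio and scale are assumptions made for this sketch and do not come from the paper.

```python
import torch
import torch.nn as nn

class SelectiveSoftDropout(nn.Module):
    """Illustrative sketch of selective, soft dropout (not the paper's code).

    Instead of zeroing random channels like SpatialDropout, pick the channels
    with the largest mean activation magnitude and scale them down softly,
    nudging the network to rely on patterns beyond the ones it already uses.
    """

    def __init__(self, select_ratio=0.25, scale=0.5):
        super().__init__()
        self.select_ratio = select_ratio  # fraction of channels treated as "important" (assumed value)
        self.scale = scale                # soft scaling factor for selected channels (assumed value)

    def forward(self, x):                 # x: (N, C, H, W)
        if not self.training:
            return x                      # identity at inference, like standard dropout
        n, c, h, w = x.shape
        k = max(1, int(c * self.select_ratio))
        # Channel importance = mean absolute activation over spatial dimensions.
        importance = x.abs().mean(dim=(2, 3))        # (N, C)
        topk = importance.topk(k, dim=1).indices     # (N, k) indices of "important" channels
        mask = torch.ones_like(importance)           # (N, C), 1.0 everywhere by default
        mask.scatter_(1, topk, self.scale)           # soft-scale the selected channels
        return x * mask.view(n, c, 1, 1)
```

Such a module would sit after a convolutional block during training, analogous to where SpatialDropout is usually inserted, and acts as the identity at evaluation time.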

Cite

APA

Chen, Z., Niu, J., Liu, X., & Tang, S. (2020). SelectScale: Mining more patterns from images via selective and soft dropout. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 523–529). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/73
