This paper considers the DropOut technique for preventing overfitting in convolutional neural networks for image classification. The goal is to find a rule for rationally allocating DropOut layers of 0.5 rate so as to maximise performance. To achieve the goal, two common network architectures are used, having either 4 or 5 convolutional layers. Benchmarking is performed on the CIFAR-10, EEACL26, and NORB datasets. Initially, a series of all admissible versions of DropOut layer allocation is generated. After the performance over the series is evaluated, normalized, and averaged, a compromise rule is found. It consists of inserting a few DropOut layers non-compactly (i.e., non-adjacently) before the last convolutional layer. A scheme with two or more DropOut layers is likely to suit networks with many convolutional layers applied to image classification problems with a large number of features. Such a scheme should also suit simple datasets prone to overfitting. In practice, the rule "prefers" a smaller number of DropOut layers. The exemplary gain from applying the rule is roughly between 10% and 50%.
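To make the allocation rule concrete, below is a minimal PyTorch sketch of a 5-convolutional-layer network following it: two Dropout(0.5) layers placed non-adjacently among the earlier convolutional layers, with none placed at or after the last convolutional layer. The layer widths, kernel sizes, and exact insertion points are illustrative assumptions, not the exact architectures benchmarked in the paper.

```python
import torch
import torch.nn as nn

class FiveConvNet(nn.Module):
    """A 5-conv-layer CNN with two non-adjacent Dropout(0.5) layers,
    both inserted before the last convolutional layer, as the paper's
    allocation rule suggests. Widths and kernels are assumptions."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout(0.5),                       # 1st DropOut layer
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(64, 96, 3, padding=1), nn.ReLU(),
            nn.Dropout(0.5),                       # 2nd DropOut, not adjacent to the 1st
            nn.Conv2d(96, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),  # last conv layer: no DropOut here
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Quick shape check on a CIFAR-10-sized batch (3x32x32 images, 10 classes).
model = FiveConvNet()
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```

The design choice mirrors the rule's two properties: the DropOut layers are few and spread apart ("non-compact"), and the feature maps feeding the classifier are left undropped, so the last convolutional layer trains on complete activations.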
Romanuke, V. V. (2017). Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets. Applied Computer Systems, 22(1), 54–63. https://doi.org/10.1515/acss-2017-0018