Regularization of deep neural networks with spectral dropout


Abstract

The big breakthrough on the ImageNet challenge in 2012 was partially due to the ‘Dropout’ technique used to avoid overfitting. Here, we introduce a new approach called ‘Spectral Dropout’ to improve the generalization ability of deep neural networks. We cast the proposed approach in the form of regular Convolutional Neural Network (CNN) weight layers using a decorrelation transform with fixed basis functions. Our spectral dropout method prevents overfitting by eliminating weak and ‘noisy’ Fourier-domain coefficients of the neural network activations, leading to remarkably better results than current regularization methods. Furthermore, the proposed approach is very efficient due to the fixed basis functions used for the spectral transformation. In particular, compared to Dropout and Drop-Connect, our method significantly speeds up network convergence during training (roughly ×2), with considerably higher neuron pruning rates (an increase of ∼30%). We demonstrate that spectral dropout can also be used in conjunction with other regularization approaches, resulting in additional performance gains.
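The abstract describes the core idea as transforming activations with a fixed-basis decorrelation transform and discarding weak, ‘noisy’ frequency coefficients. The sketch below illustrates that idea only; it is not the authors' implementation. The function name spectral_dropout, the keep_ratio parameter, and the choice of NumPy's FFT as the fixed basis are assumptions made for illustration.

import numpy as np

def spectral_dropout(activations, keep_ratio=0.9):
    # Illustrative sketch only: `spectral_dropout` and `keep_ratio` are
    # hypothetical names, and the FFT stands in for the paper's fixed-basis
    # decorrelation transform.
    coeffs = np.fft.fft2(activations, axes=(-2, -1))  # transform over spatial dims
    mags = np.abs(coeffs)
    # Keep only the strongest `keep_ratio` fraction of frequency coefficients.
    threshold = np.quantile(mags, 1.0 - keep_ratio)
    mask = mags >= threshold
    # Zero the weak / "noisy" coefficients, then invert the transform.
    return np.real(np.fft.ifft2(coeffs * mask, axes=(-2, -1)))

# Example on a batch of feature maps shaped (batch, channels, height, width).
x = np.random.randn(8, 16, 32, 32)
y = spectral_dropout(x, keep_ratio=0.9)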

Citation (APA)

Khan, S. H., Hayat, M., & Porikli, F. (2019). Regularization of deep neural networks with spectral dropout. Neural Networks, 110, 82–90. https://doi.org/10.1016/j.neunet.2018.09.009
