Numerous approaches address overfitting in neural networks: imposing a penalty on the network parameters (L1, L2, etc.); perturbing the network stochastically (dropout, Gaussian noise, etc.); or transforming the input data (batch normalization, etc.). In contrast, we aim to ensure that a minimum amount of supporting evidence is present when fitting the model parameters to the training data. At the level of a single neuron, this is equivalent to ensuring that both sides of the separating hyperplane (for a standard artificial neuron) contain a minimum number of data points, noting that these points need not belong to the same class for the inner layers. We first benchmark this approach on the standard Fashion-MNIST dataset, comparing it to various regularization techniques. Interestingly, we note that by nudging each neuron to divide, at least in part, its input data, the resulting networks make use of every neuron, avoiding hyperplanes that lie entirely on one side of their input data (which are equivalent to feeding a constant into the subsequent layers). To illustrate this point, we study the prevalence of saturated nodes throughout training, showing that neurons are activated more frequently and earlier in training when using this regularization approach. A direct consequence of the improved neuron activation is that deep networks become easier to train. This is crucially important when the network topology is not known a priori and fitting often remains stuck in a suboptimal local minimum. We demonstrate this property by training networks of increasing depth (and constant width); most regularization approaches result in increasingly frequent training failures (over different random seeds), whereas the proposed evidence-based regularization significantly outperforms them in its ability to train deep networks.
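
As a concrete illustration, the sketch below implements one plausible reading of this constraint as a differentiable penalty in PyTorch: for each neuron, a soft count of the batch points on the positive side of its hyperplane is computed from the pre-activations, and a hinge term fires whenever either side holds less than a minimum fraction of the data. The exact formulation is not given in this abstract, so the function name and the min_fraction and temperature parameters below are illustrative assumptions, not the authors' method.

import torch
import torch.nn.functional as F

def evidence_penalty(pre_activations: torch.Tensor,
                     min_fraction: float = 0.1,
                     temperature: float = 1.0) -> torch.Tensor:
    # Hypothetical sketch of evidence-based regularization, not the paper's
    # exact penalty. pre_activations: (batch, units) tensor of w^T x + b
    # values, i.e. the signed (scaled) distances of each input from each
    # neuron's separating hyperplane.
    # Soft indicator that a point lies on the positive side of the hyperplane.
    side = torch.sigmoid(pre_activations / temperature)
    # Fraction of the batch on the positive side, per neuron.
    pos_fraction = side.mean(dim=0)
    # Hinge penalty whenever either side holds less than min_fraction of the
    # batch, nudging every neuron to actually divide its input data.
    deficit = (F.relu(min_fraction - pos_fraction)
               + F.relu(min_fraction - (1.0 - pos_fraction)))
    return deficit.sum()

In training, such a term would be added to the task loss with a small weight (e.g. summing evidence_penalty over all layers' pre-activations), which directly discourages the saturated, always-on or always-off neurons discussed above.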