Neural Net Optimization by Weight-Entropy Monitoring

Abstract

A novel technique of monitoring the entropy of network weights is proposed for optimizing the multilayer perceptron neural network classifier. The set of weights associated with the input of every perceptron is normalized to a probability distribution, and the entropy of weights is computed for the whole network using the chain rule. The synaptic weights, which are initially random, converge to definite values during epoch training. The stopping criterion for the gradient-based backpropagation (BP) optimization algorithm is defined by the stabilization of the weight entropy over a time window, even though the cost function continues to decline steadily. For neural networks trained by the gradient-free particle swarm optimization (PSO), the point of convergence is interpreted as the particle position at which the weight entropy is minimum, corresponding to the most uneven distribution of network weights. The entropy used in our experiments is the non-extensive entropy with Gaussian gain, which is nonadditive under summation. Experimental results on benchmark datasets from the UCI repository indicate quicker convergence of the optimization process in both cases, while high classification accuracies are maintained.
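The sketch below illustrates the general idea described in the abstract: normalize each perceptron's incoming weights to a probability distribution, sum per-neuron entropies over the network, and stop training once the total entropy stabilizes over a window of epochs. It is not the authors' code; the exact non-extensive entropy with Gaussian gain is not specified in the abstract, so Shannon entropy is used here as a stand-in, and the `window` and `tol` parameters are illustrative assumptions.

```python
# Minimal sketch of weight-entropy monitoring for an MLP (assumptions noted below).
# Weights are assumed to be a list of numpy arrays, one per layer, shaped
# (n_inputs, n_neurons). Shannon entropy stands in for the paper's non-extensive
# entropy with Gaussian gain, whose formula is not given in the abstract.

import numpy as np

def neuron_weight_entropy(w_in):
    """Entropy of one perceptron's incoming weights, normalized to a distribution."""
    p = np.abs(w_in) / (np.sum(np.abs(w_in)) + 1e-12)  # normalize |w| to probabilities
    p = p[p > 0]
    return -np.sum(p * np.log(p))  # stand-in for the non-extensive Gaussian-gain entropy

def network_weight_entropy(weights):
    """Total weight entropy of the network: sum of per-neuron entropies over all layers."""
    total = 0.0
    for W in weights:                       # W has shape (n_inputs, n_neurons)
        for j in range(W.shape[1]):
            total += neuron_weight_entropy(W[:, j])
    return total

def entropy_stabilized(history, window=10, tol=1e-4):
    """Stopping criterion: entropy variation over the last `window` epochs is below `tol`."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return (max(recent) - min(recent)) < tol

# Hypothetical use inside a backpropagation training loop:
# entropy_history = []
# for epoch in range(max_epochs):
#     weights = train_one_epoch(weights)              # hypothetical BP weight update
#     entropy_history.append(network_weight_entropy(weights))
#     if entropy_stabilized(entropy_history):
#         break   # stop even if the cost function is still decreasing
```

For the PSO variant described in the abstract, the same `network_weight_entropy` value could instead be evaluated at each particle position, with the minimum-entropy position taken as the point of convergence.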

Citation (APA)

Susan, S., Ranjan, R., Taluja, U., Rai, S., & Agarwal, P. (2019). Neural Net Optimization by Weight-Entropy Monitoring. In Advances in Intelligent Systems and Computing (Vol. 799, pp. 201–213). Springer Verlag. https://doi.org/10.1007/978-981-13-1135-2_16
