A decorrelation approach for pruning of multilayer perceptron networks

Abstract

In this paper, the architecture selection of a three-layer nonlinear feedforward network with linear output neurons and sigmoidal hidden neurons is carried out. In the proposed method, the conventional backpropagation (BP) learning algorithm is used to train the network by minimizing the representation error. A new pruning algorithm employing statistical analysis quantifies the importance of each hidden unit. This is accomplished by providing lateral connections among the neurons of the hidden layer and minimizing the variance of the hidden neurons. Variance minimization results in decorrelated neurons, and thus the learning rule for the lateral connections in the hidden layer becomes a variation of anti-Hebbian learning. The decorrelation process minimizes redundant information shared among the hidden neurons and therefore enables the network to capture the statistical properties of the required input-output mapping using the minimum number of hidden nodes. Hidden nodes with the least contribution to the error minimization at the output layer are then pruned. Experimental results show that the proposed pruning algorithm correctly prunes irrelevant hidden units.
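The core mechanism described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a generic anti-Hebbian rule for symmetric lateral weights, ΔW_ij = -η·⟨y_i y_j⟩ for i ≠ j, applied to toy hidden-layer activations in which one neuron is a near-copy of another. After training the lateral weights, the redundant pair's decorrelated variance collapses, marking it as a pruning candidate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden-layer activations: three informative neurons plus a
# fourth that is a near-copy of the first, i.e. redundant.
n_samples, n_hidden = 500, 4
H = rng.standard_normal((n_samples, 3))
H = np.column_stack([H, H[:, 0] + 0.01 * rng.standard_normal(n_samples)])

# Anti-Hebbian learning for the lateral weights W (zero diagonal):
# delta W_ij = -eta * <y_i y_j>, which drives the laterally-connected
# outputs y = h + W h toward zero pairwise correlation.
eta = 0.05
W = np.zeros((n_hidden, n_hidden))
for _ in range(200):
    Y = H + H @ W.T                # decorrelated hidden outputs
    C = (Y.T @ Y) / n_samples      # output correlation matrix
    dW = -eta * C
    np.fill_diagonal(dW, 0.0)      # no self-connections
    W += dW

Y = H + H @ W.T
C = (Y.T @ Y) / n_samples
# A neuron whose decorrelated variance collapses carried only
# redundant information and is a candidate for pruning.
variances = np.diag(C)
prune_candidate = int(np.argmin(variances))
```

In this toy run, the off-diagonal output correlations shrink toward zero, the variances of the two mutually redundant neurons (indices 0 and 3) collapse while the independent neurons keep roughly unit variance, and `prune_candidate` lands on one of the redundant pair.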

Citation (APA)

Abbas, H. M. (2014). A decorrelation approach for pruning of multilayer perceptron networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8774, pp. 12–22). Springer Verlag. https://doi.org/10.1007/978-3-319-11656-3_2
