Multilayer perceptrons as classifiers guided by mutual information and trained with genetic algorithms


Abstract

Multilayer perceptrons can be trained with several algorithms and with different objective quantities that relate the expected output to the achieved state. The most common such quantity is the mean square error, but information-theoretic quantities have also been applied with great success. A common scheme to train multilayer perceptrons is based on evolutionary computing, as an alternative to the commonly applied backpropagation algorithm. In this contribution we evaluate the performance of multilayer perceptrons as classifiers when trained with genetic algorithms, using the mutual information between the label produced by the network and the expected class. We propose a classification algorithm in which each input variable is replaced by a function of it, chosen so that the mutual information between the transformed variable and the label is maximized. Those approximated functions are then fed as input to a multilayer perceptron in charge of learning the classification map, trained with genetic algorithms and guided by mutual information. © 2012 Springer-Verlag.
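The central quantity in the scheme described above is the empirical mutual information between the labels produced by the network and the expected classes, which serves as the fitness a genetic algorithm maximizes. A minimal Python sketch of that estimator follows; the function names are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Empirical mutual information (in nats) between two discrete label sequences."""
    n = len(labels_a)
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    count_ab = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), c_ab in count_ab.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with empirical counts as probabilities
        mi += (c_ab / n) * math.log(c_ab * n / (count_a[a] * count_b[b]))
    return mi

# Illustrative GA fitness: a candidate network scores higher the more
# informative its predicted labels are about the expected classes.
def fitness(predicted_labels, expected_labels):
    return mutual_information(predicted_labels, expected_labels)
```

Note that, unlike accuracy, this fitness rewards any deterministic relation between predictions and classes: a network whose outputs are a consistent permutation of the true labels still attains maximal mutual information, which the training procedure must account for when reading off class assignments.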

Citation (APA)

Neme, A., Hernández, S., Nido, A., & Islas, C. (2012). Multilayer perceptrons as classifiers guided by mutual information and trained with genetic algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7435 LNCS, pp. 176–183). https://doi.org/10.1007/978-3-642-32639-4_22
