Simplified Information Maximization for Improving Generalization Performance in Multilayered Neural Networks

  • Kamimura R

Abstract

A new information-theoretic method is proposed to improve prediction performance in supervised learning. The method has two main technical features. First, the complicated procedures usually needed to increase information content are replaced by the direct use of hidden neuron outputs: information is controlled by directly changing those outputs. Second, to simultaneously increase information content and decrease errors between targets and outputs, the information acquisition and information use phases are separated. In the information acquisition phase, an autoencoder tries to acquire as much information on the input patterns as possible. In the information use phase, the information obtained in the acquisition phase is used for supervised training. The method is thus a simplified version of full information maximization that deals directly with neuron outputs. It was applied to three data sets: the Iris, bankruptcy, and rebel participation data sets. Experimental results showed that the proposed simplified information acquisition method was effective in increasing the real information content, and that using this information content greatly improved generalization performance.
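The abstract's two-phase scheme can be sketched as follows. This is a minimal illustration under several assumptions, not the paper's implementation: the autoencoder architecture, the entropy-style information measure on normalized hidden outputs, and all hyperparameters here are illustrative choices, and the paper's exact definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden=3, lr=0.1, epochs=500):
    """Phase 1 (information acquisition): train a one-hidden-layer
    autoencoder so the hidden layer captures the input patterns."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # hidden neuron outputs
        Xhat = H @ W2 + b2              # linear reconstruction
        dO = (Xhat - X) / len(X)        # mean-squared-error gradient
        dH = (dO @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return W1, b1

def hidden_information(H):
    """Illustrative entropy-style measure on hidden outputs, normalized
    per pattern: 0 means all neurons fire uniformly, log(M) means a
    single neuron fires (maximally selective responses)."""
    p = H / H.sum(axis=1, keepdims=True)
    ent = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    return np.log(H.shape[1]) - ent

def train_classifier(H, y, lr=0.5, epochs=2000):
    """Phase 2 (information use): the hidden outputs acquired in
    phase 1 become features for a supervised logistic classifier."""
    w = np.zeros(H.shape[1]); b = 0.0
    for _ in range(epochs):
        p = sigmoid(H @ w + b)
        g = (p - y) / len(y)
        w -= lr * H.T @ g; b -= lr * g.sum()
    return w, b

# Toy two-class data (two Gaussian blobs), standardized.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
X = (X - X.mean(0)) / X.std(0)

W1, b1 = train_autoencoder(X)          # phase 1
H = sigmoid(X @ W1 + b1)
info = hidden_information(H)
w, b = train_classifier(H, y)          # phase 2
acc = ((sigmoid(H @ w + b) > 0.5).astype(int) == y).mean()
```

On this toy problem the classifier trained on the phase-1 hidden outputs separates the two classes; the point of the sketch is only the separation of the unsupervised acquisition step from the supervised use step, which is the structural idea the abstract describes.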

Citation (APA)
Kamimura, R. (2016). Simplified Information Maximization for Improving Generalization Performance in Multilayered Neural Networks. Mathematical Problems in Engineering, 2016, 1–17. https://doi.org/10.1155/2016/3015087