Solving the vanishing information problem with repeated potential mutual information maximization

Abstract

The present paper shows how to solve the problem of vanishing information in potential mutual information maximization. We previously developed an information-theoretic method called "potential learning," which aims to extract the most important features through simplified information maximization. One of its major problems, however, is that the potential effect diminishes considerably in the course of learning, so that the potentiality can no longer be taken into account. To solve this problem, we introduce repeated information maximization: to strengthen the information-maximization process, the method forces the potentiality to be re-assimilated into learning every time it becomes ineffective. The method was applied to the online article popularity data set to estimate the popularity of articles. To demonstrate its effectiveness, the number of hidden neurons was made excessively large and set to 50. The results show that potential mutual information maximization could still increase mutual information with 50 hidden neurons and lead to improved generalization performance. In addition, simplified representations could be obtained for better interpretation and generalization.
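
As a rough illustration of the quantity the abstract refers to, the sketch below computes the mutual information between hidden neurons and input patterns from normalized hidden activations, in the style of information-theoretic competitive learning. This is not the paper's code: the activation values, the network size (50 hidden neurons, as in the experiment), and the probability estimates p(j|s) and p(j) are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's implementation) of mutual information
# between hidden neurons and input patterns, estimated from hidden activations.
import numpy as np

rng = np.random.default_rng(0)
S, J = 200, 50          # assumed number of input patterns; 50 hidden neurons as in the paper
v = rng.random((S, J))  # assumed hidden activations v_j^s in (0, 1)

# Firing probability of neuron j given pattern s, and its marginal p(j)
p_j_given_s = v / v.sum(axis=1, keepdims=True)
p_j = p_j_given_s.mean(axis=0)

# Mutual information I = sum_s p(s) sum_j p(j|s) log( p(j|s) / p(j) ),
# with a uniform pattern probability p(s) = 1/S
p_s = 1.0 / S
mi = p_s * np.sum(p_j_given_s * np.log(p_j_given_s / p_j))
print(f"mutual information: {mi:.4f} nats")
```

Maximizing this quantity during training is what the paper calls information maximization; the repeated variant re-applies the potentiality whenever its effect on this measure has faded.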

Cite

CITATION STYLE

APA

Kamimura, R. (2016). Solving the vanishing information problem with repeated potential mutual information maximization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9950 LNCS, pp. 442–451). Springer Verlag. https://doi.org/10.1007/978-3-319-46681-1_53
