Simple and stable internal representation by potential mutual information maximization

Abstract

This paper aims to interpret the final representations obtained by neural networks by maximizing the mutual information between neurons and data sets. Because maximizing mutual information directly requires complex procedures, the present method simplifies the computation as far as possible by maximizing the information indirectly, focusing on the potentiality of neurons. The method was applied to restaurant data on which ordinary regression analysis performed poorly. On this problem, the aim was to interpret the final representations while also improving generalization performance. The results revealed a simple configuration in which a single important feature was extracted that explicitly explained the motivation to visit the restaurant.
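To make the quantity being maximized concrete, the following is a minimal sketch of the mutual information between hidden neurons and input patterns, assuming the common convention of treating hidden-neuron activations, normalized over the neurons for each pattern, as firing probabilities and assuming equally likely input patterns. It only illustrates the mutual-information measure referred to in the abstract; it does not reproduce the paper's indirect, potential-based maximization procedure.

import numpy as np

def neuron_data_mutual_information(activations):
    # activations: (n_patterns, n_neurons) array of non-negative
    # hidden-neuron outputs. The normalization below is an assumed
    # convention, not necessarily the paper's exact definition.
    acts = np.asarray(activations, dtype=float)
    n_patterns, _ = acts.shape

    # p(j|s): probability that neuron j fires for input pattern s
    p_j_given_s = acts / acts.sum(axis=1, keepdims=True)

    # p(s): input patterns assumed equally likely
    p_s = 1.0 / n_patterns

    # p(j): marginal firing probability of neuron j
    p_j = p_j_given_s.mean(axis=0)

    # I(J;S) = sum_s sum_j p(s) p(j|s) log( p(j|s) / p(j) )
    eps = 1e-12
    ratio = (p_j_given_s + eps) / (p_j + eps)
    return float(np.sum(p_s * p_j_given_s * np.log(ratio)))

# Example with hypothetical activations for 4 patterns and 3 hidden neurons:
acts = np.array([[0.90, 0.05, 0.05],
                 [0.10, 0.80, 0.10],
                 [0.05, 0.05, 0.90],
                 [0.60, 0.20, 0.20]])
print(neuron_data_mutual_information(acts))

Higher values of this quantity indicate that individual neurons respond selectively to particular input patterns, which is what makes the resulting internal representation easier to interpret.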

Citation (APA)

Kamimura, R. (2016). Simple and stable internal representation by potential mutual information maximization. In Communications in Computer and Information Science (Vol. 629, pp. 309–316). Springer Verlag. https://doi.org/10.1007/978-3-319-44188-7_23
