Recursive extraction of modular structure from layered neural networks using variational Bayes method

Abstract

Deep neural networks have made a substantial contribution to the recognition and prediction of complex data in various fields, such as image processing, speech recognition, and bioinformatics. However, it is very difficult to discover knowledge from the inference provided by a neural network, since its internal representation consists of many nonlinear and hierarchical parameters. To solve this problem, an approach has been proposed that extracts a global, simplified structure from a neural network. Although it can successfully detect such a hidden modular structure, its convergence is not sufficiently stable and is vulnerable to the initial parameters. In this paper, we propose a new deep learning algorithm that consists of recursive backpropagation, community detection using variational Bayes, and pruning of unnecessary connections. We show that the proposed method can appropriately detect a hidden inference structure and compress a neural network without increasing the generalization error.
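The abstract outlines a three-step pipeline: train by backpropagation, group units into communities with variational Bayes, and prune unnecessary connections. As an illustration of the last step only, the sketch below applies generic magnitude-based pruning to a layered network's weight matrices. This is an assumption for illustration; the paper's actual pruning criterion is tied to its detected community structure and may differ.

```python
import numpy as np


def prune_small_weights(weights, threshold=0.5):
    """Zero out connections whose absolute weight is below `threshold`.

    `weights` is a list of per-layer weight matrices. This is generic
    magnitude-based pruning, used here only as a stand-in for the
    paper's community-aware pruning step.
    """
    return [np.where(np.abs(W) < threshold, 0.0, W) for W in weights]


# Hypothetical weights for a small 4-3-2 layered network.
rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 1.0, (4, 3)), rng.normal(0.0, 1.0, (3, 2))]
pruned = prune_small_weights(weights, threshold=0.5)
# Each pruned matrix keeps its shape; entries with |w| < 0.5 become exactly 0.
```

In the paper's setting, pruning is applied after community detection so that weak connections between detected modules are removed, compressing the network while keeping its inference structure visible.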


CITATION STYLE

APA

Watanabe, C., Hiramatsu, K., & Kashino, K. (2017). Recursive extraction of modular structure from layered neural networks using variational Bayes method. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10558 LNAI, pp. 207–222). Springer Verlag. https://doi.org/10.1007/978-3-319-67786-6_15
