Building sparse deep feedforward networks using tree receptive fields


Abstract

Sparse connectivity is an important factor behind the success of convolutional neural networks and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the level below. We use Chow-Liu's algorithm to learn a tree-structured probabilistic model for the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce a new unit with a receptive field over each subset. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
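The core step of the abstract — fitting a Chow-Liu tree over the current layer's units and deriving receptive fields from it — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the Chow-Liu tree is built as a maximum spanning tree under empirical pairwise mutual information (Kruskal's algorithm), and, as a simplification, each tree edge is taken to define the two-unit receptive field of one new hidden unit, whereas the paper's subset-selection details may differ.

```python
import numpy as np

def pairwise_mi(X, eps=1e-12):
    """Empirical mutual information between all pairs of binary columns of X."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            m = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    p_ab = np.mean((X[:, i] == a) & (X[:, j] == b))
                    p_a = np.mean(X[:, i] == a)
                    p_b = np.mean(X[:, j] == b)
                    if p_ab > 0:
                        m += p_ab * np.log(p_ab / (p_a * p_b + eps) + eps)
            mi[i, j] = mi[j, i] = m
    return mi

def chow_liu_edges(X):
    """Chow-Liu tree: maximum spanning tree of the mutual-information graph
    (Kruskal's algorithm with union-find)."""
    d = X.shape[1]
    mi = pairwise_mi(X)
    pairs = sorted(((mi[i, j], i, j) for i in range(d) for j in range(i + 1, d)),
                   reverse=True)
    parent = list(range(d))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u
    edges = []
    for _, i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:              # adding this edge keeps the graph acyclic
            parent[ri] = rj
            edges.append((i, j))
    return edges                  # d - 1 edges spanning all units

def tree_receptive_fields(X):
    """Simplified receptive-field extraction: one new hidden unit per tree
    edge, connected to exactly the two units that edge joins."""
    return [sorted(e) for e in chow_liu_edges(X)]
```

Stacking layers, as the abstract describes, would then amount to computing activations of the new units on the training data and re-running `tree_receptive_fields` on those activations.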

Cite (APA)

Li, X., Chen, Z., & Zhang, N. L. (2018). Building sparse deep feedforward networks using tree receptive fields. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 5045–5051). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/700
