Fast structure learning for deep feedforward networks via tree skeleton expansion


Abstract

Despite the popularity of deep learning, structure learning for deep models remains a relatively under-explored area. In contrast, structure learning has been studied extensively for probabilistic graphical models (PGMs). In particular, an efficient algorithm has been developed for learning a class of tree-structured PGMs called hierarchical latent tree models (HLTMs), where there is a layer of observed variables at the bottom and multiple layers of latent variables on top. In this paper, we propose a simple unsupervised method for learning the structures of feedforward neural networks (FNNs) based on HLTMs. The idea is to expand the connections in the tree skeletons from HLTMs and to use the resulting structures for FNNs. Our method is very fast and it yields deep structures of virtually the same quality as those produced by the very time-consuming grid search method.
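To make the skeleton-expansion idea concrete, here is a minimal sketch of how an HLTM tree skeleton might be turned into an FNN layer structure. All names, the expansion factor `k`, and the exact connectivity rule are illustrative assumptions, not the authors' exact procedure: the sketch keeps the observed layer as-is, expands each latent variable into a block of `k` hidden units, and connects each node's block to its tree parent's block.

```python
# Hypothetical sketch of tree-skeleton expansion for FNN structure learning.
# Assumptions (not from the paper): each latent variable becomes k units,
# and connectivity follows the tree edges between adjacent layers.

def expand_skeleton(layers, parents, k=4):
    """layers: list of lists of node ids per level (layers[0] = observed).
    parents: dict mapping each node id to its parent id in the next layer up.
    Returns per-layer unit counts and a 0/1 connectivity mask per layer pair."""
    # Observed layer keeps its size; each latent variable expands into k units.
    widths = [len(layers[0])] + [k * len(lvl) for lvl in layers[1:]]
    masks = []
    for li in range(len(layers) - 1):
        lower, upper = layers[li], layers[li + 1]
        lo_mult = 1 if li == 0 else k  # observed nodes are single units
        mask = [[0] * (len(upper) * k) for _ in range(len(lower) * lo_mult)]
        for ci, child in enumerate(lower):
            pi = upper.index(parents[child])  # parent's position in upper layer
            # Connect the child's unit block to the parent's unit block.
            for a in range(lo_mult):
                for b in range(k):
                    mask[ci * lo_mult + a][pi * k + b] = 1
        masks.append(mask)
    return widths, masks
```

For example, a two-latent-layer skeleton with observed variables x1..x4, latents h1 (parent of x1, x2) and h2 (parent of x3, x4), and a root z yields layer widths [4, 8, 4] with k=4, plus block-sparse masks that could initialize (or prune) the weight matrices of an FNN. The paper's full method additionally expands connections beyond the tree edges before training.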

Citation (APA)

Chen, Z., Li, X., Tian, Z., & Zhang, N. L. (2019). Fast structure learning for deep feedforward networks via tree skeleton expansion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11726 LNAI, pp. 277–289). Springer Verlag. https://doi.org/10.1007/978-3-030-29765-7_23
