Learning general latent-variable graphical models with predictive belief propagation


Abstract

Learning general latent-variable probabilistic graphical models is a key theoretical challenge in machine learning and artificial intelligence. All previous methods, including the EM algorithm and spectral algorithms, face severe limitations that restrict their applicability and degrade their performance. To overcome these limitations, we introduce a novel formulation of message-passing inference over junction trees, named predictive belief propagation, and propose a new learning and inference algorithm for general latent-variable graphical models based on this formulation. The proposed algorithm reduces the hard parameter-learning problem to a sequence of supervised learning problems, and unifies the learning of different kinds of latent graphical models in a single framework that is free of local optima and statistically consistent. We prove the correctness of our algorithm and show, in experiments on both synthetic and real datasets, that it significantly outperforms both the EM algorithm and the spectral algorithm while being orders of magnitude faster to compute.
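
The abstract's central idea is reducing latent-variable parameter learning to a sequence of supervised learning problems. As a rough, minimal illustration of that kind of reduction (not the authors' predictive belief propagation algorithm, and not code from the paper), the sketch below learns a "message" on a toy three-variable hidden Markov chain by ridge-regressing features of the future observations onto features of the past observation, then uses the learned map for a simple predictive query. The ground-truth HMM, sample sizes, and variable names are hypothetical and serve only to generate synthetic data.

    # Toy sketch: reducing learning to supervised regression on a chain x1 - x2 - x3.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ground-truth HMM (2 hidden states, 3 observation symbols),
    # used only to generate synthetic training data.
    T = np.array([[0.8, 0.2], [0.3, 0.7]])            # hidden-state transition matrix
    O = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])  # emission matrix
    pi = np.array([0.5, 0.5])                          # initial hidden-state distribution

    def sample_triple():
        h = rng.choice(2, p=pi)
        xs = []
        for _ in range(3):
            xs.append(rng.choice(3, p=O[h]))
            h = rng.choice(2, p=T[h])
        return xs

    def onehot(i, k=3):
        v = np.zeros(k)
        v[i] = 1.0
        return v

    # Supervised-learning reduction: regress features of the "future" (x2, x3)
    # onto features of the "past" (x1). The learned linear map plays the role of
    # a message update on the chain.
    N = 20000
    X = np.zeros((N, 3))   # past features: one-hot of x1
    Y = np.zeros((N, 9))   # future features: flattened outer product of one-hots of x2, x3
    for n in range(N):
        x1, x2, x3 = sample_triple()
        X[n] = onehot(x1)
        Y[n] = np.outer(onehot(x2), onehot(x3)).ravel()

    lam = 1e-3
    W = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ Y)  # ridge regression

    # "Inference": given x1 = 0, the learned message estimates the joint
    # distribution of (x2, x3); marginalizing over x2 gives P(x3 | x1 = 0).
    msg = onehot(0) @ W
    p_x3 = msg.reshape(3, 3).sum(axis=0)
    print("estimated P(x3 | x1=0):", np.round(p_x3 / p_x3.sum(), 3))

In this toy setting the regression targets are fully observed, so each message update becomes an ordinary supervised problem with no latent variables to marginalize during training, which is the property that makes such reductions free of local optima.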

Citation (APA)

Wang, B., & Gordon, G. (2020). Learning general latent-variable graphical models with predictive belief propagation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020) (pp. 6118–6126). AAAI Press. https://doi.org/10.1609/aaai.v34i04.6076
