MaxEntropy Pursuit Variational Inference


Abstract

One of the core problems in variational inference is the choice of the approximate posterior distribution: one must trade off efficient inference with simple families, such as mean-field models, against the accuracy of the approximation. We propose a variant of greedy approximation of the posterior distribution with tractable base learners. Using a Max-Entropy approach, we obtain a well-defined optimization problem. We demonstrate the ability of the method to capture complex multimodal posteriors in a continual learning setting for neural networks.
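
To illustrate the general idea of greedy (pursuit-style) posterior approximation with tractable base learners, the sketch below fits a Gaussian mixture to a multimodal 1-D target by adding one component at a time, each chosen to maximise a Monte Carlo estimate of the ELBO of the augmented mixture. This is only a minimal sketch under those assumptions: the names (log_p, greedy_step), the Gaussian base family, and the plain ELBO objective are illustrative choices; the paper's MaxEntropy formulation of the new-component problem and its continual-learning experiments are not reproduced here.

```python
# Minimal sketch of greedy mixture-based variational inference (not the paper's
# exact MaxEntropy objective): Gaussian base learners are added one at a time
# to approximate an unnormalised multimodal target density.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)

def log_p(x):
    """Unnormalised multimodal target: an equal mixture of two Gaussians."""
    comps = np.stack([norm.logpdf(x, -2.0, 0.5), norm.logpdf(x, 2.0, 0.7)])
    return logsumexp(comps, axis=0) - np.log(2.0)

def log_q(x, means, log_stds, log_ws):
    """Log-density of the current Gaussian-mixture approximation."""
    comps = norm.logpdf(x[:, None], means[None, :], np.exp(log_stds)[None, :])
    return logsumexp(comps + log_ws[None, :], axis=1)

def greedy_step(means, log_stds, log_ws, n_mc=256):
    """Add one Gaussian base learner by maximising a Monte Carlo ELBO."""
    eps = rng.standard_normal(n_mc)                       # reparameterisation noise
    idx = rng.choice(len(means), size=n_mc, p=np.exp(log_ws))
    x_old = means[idx] + np.exp(log_stds[idx]) * rng.standard_normal(n_mc)

    def neg_elbo(theta):
        mu, log_std, logit_a = theta
        a = np.clip(1.0 / (1.0 + np.exp(-logit_a)), 1e-3, 1.0 - 1e-3)
        m = np.append(means, mu)                          # candidate mixture parameters
        s = np.append(log_stds, log_std)
        w = np.append(log_ws + np.log1p(-a), np.log(a))
        x_new = mu + np.exp(log_std) * eps                # samples from the candidate component
        elbo = ((1.0 - a) * np.mean(log_p(x_old) - log_q(x_old, m, s, w))
                + a * np.mean(log_p(x_new) - log_q(x_new, m, s, w)))
        return -elbo

    res = minimize(neg_elbo, x0=np.array([rng.normal(), -1.0, 0.0]), method="Nelder-Mead")
    mu, log_std, logit_a = res.x
    a = np.clip(1.0 / (1.0 + np.exp(-logit_a)), 1e-3, 1.0 - 1e-3)
    return (np.append(means, mu), np.append(log_stds, log_std),
            np.append(log_ws + np.log1p(-a), np.log(a)))

# start from a single broad Gaussian and greedily add three components
means, log_stds, log_ws = np.array([0.0]), np.array([1.0]), np.array([0.0])
for _ in range(3):
    means, log_stds, log_ws = greedy_step(means, log_stds, log_ws)
print("component means:", np.round(means, 2))
print("component weights:", np.round(np.exp(log_ws), 2))
```

In this sketch the new component's mean, log-scale, and mixture weight are optimised jointly with a derivative-free method; common random numbers (fixed noise and fixed samples from the current mixture) keep the Monte Carlo objective smooth enough for that to work.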

Cite

APA

Egorov, E., Neklyudov, K., Kostoev, R., & Burnaev, E. (2019). MaxEntropy Pursuit Variational Inference. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11554 LNCS, pp. 409–417). Springer Verlag. https://doi.org/10.1007/978-3-030-22796-8_43
