One of the core problems in variational inference is the choice of the approximate posterior distribution. One must trade off between the efficiency of inference with simple families, such as mean-field models, and the accuracy of the resulting approximation. We propose a variant of a greedy approximation of the posterior distribution with tractable base learners. Using a Max-Entropy approach, we obtain a well-defined optimization problem. We demonstrate the ability of the method to capture complex multimodal posteriors in a continual-learning setting for neural networks.
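The greedy construction mentioned in the abstract can be illustrated with a boosting-style sketch: components from a tractable family (here, 1D Gaussians) are added one at a time to a mixture approximation, each chosen to reduce a KL-based objective against an unnormalized target. This is a minimal illustration of the general greedy-pursuit idea, not the paper's actual MaxEntropy objective or algorithm; the step-size schedule and grid search over candidates are simplifying assumptions.

```python
import numpy as np

# Quadrature grid for deterministic KL estimates in 1D.
GRID = np.linspace(-6.0, 6.0, 601)
DX = GRID[1] - GRID[0]

def log_p(x):
    # Unnormalized bimodal target: sum of two Gaussian bumps at -2 and +2.
    return np.logaddexp(-0.5 * ((x + 2.0) / 0.5) ** 2,
                        -0.5 * ((x - 2.0) / 0.5) ** 2)

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def log_q(x, comps, weights):
    # Log-density of the current Gaussian mixture approximation.
    logs = np.stack([np.log(w) + log_gauss(x, mu, s)
                     for (mu, s), w in zip(comps, weights)])
    return np.logaddexp.reduce(logs, axis=0)

def kl_to_target(comps, weights):
    # KL(q || p) up to the constant log Z of the target -- enough for comparison.
    lq = log_q(GRID, comps, weights)
    return np.sum(np.exp(lq) * (lq - log_p(GRID))) * DX

def greedy_fit(n_components=3):
    comps, weights = [], []
    for t in range(n_components):
        # Heuristic mixing-weight schedule alpha_t = 1/(t+1) (an assumption).
        alpha = 1.0 / (t + 1)
        best, best_kl = None, np.inf
        # Grid search over candidate base learners (illustrative, not the paper's method).
        for mu in np.linspace(-4.0, 4.0, 33):
            for sigma in (0.3, 0.5, 1.0):
                cand_comps = comps + [(mu, sigma)]
                cand_w = [w * (1.0 - alpha) for w in weights] + [alpha]
                kl = kl_to_target(cand_comps, cand_w)
                if kl < best_kl:
                    best_kl, best = kl, (cand_comps, cand_w)
        comps, weights = best
    return comps, weights

comps, weights = greedy_fit(3)
print("components:", comps)
print("weights:", weights)
```

After the second greedy step the mixture already places a component at each mode of the bimodal target, which a single mean-field Gaussian cannot do; this is the kind of multimodal posterior structure the greedy-pursuit construction is meant to capture.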
Citation
Egorov, E., Neklyudov, K., Kostoev, R., & Burnaev, E. (2019). MaxEntropy Pursuit Variational Inference. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11554 LNCS, pp. 409–417). Springer Verlag. https://doi.org/10.1007/978-3-030-22796-8_43