Predictive uncertainty estimation for tractable deep probabilistic models

Abstract

Tractable Deep Probabilistic Models (TPMs) are generative models based on arithmetic circuits that allow exact marginal inference in linear time. These models have obtained promising results in several machine learning tasks. Like many other models, however, TPMs can produce over-confident, incorrect inferences, especially in regions with small statistical support. In this work, we will develop efficient estimators of predictive uncertainty that are robust to data scarcity and outliers. We investigate two approaches. The first measures the variability of the output under perturbations of the model weights. The second captures the variability of the prediction under changes in the model architecture. We will evaluate both approaches on challenging tasks such as image completion and multi-label classification.
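The first approach described above can be illustrated with a small sketch. The following is a hypothetical, minimal example, not the paper's actual model or estimator: it treats a two-component mixture as a toy arithmetic circuit (one sum node over Gaussian leaves), perturbs the sum-node weights with Dirichlet noise centered on the learned weights, and reports the variance of the circuit output as an uncertainty score. The `scale` parameter and the Dirichlet choice are assumptions made for the illustration.

```python
import math
import random

# Toy "arithmetic circuit": a single sum node over two Gaussian leaves.
# This is a stand-in for a real TPM, used only to illustrate the idea of
# weight-perturbation-based uncertainty estimation.

def circuit_density(x, weights, params):
    """Evaluate the circuit: a weighted sum of Gaussian leaf densities."""
    total = 0.0
    for w, (mu, sigma) in zip(weights, params):
        leaf = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        total += w * leaf
    return total

def weight_perturbation_uncertainty(x, weights, params, scale=50.0, n_samples=200, seed=0):
    """Estimate predictive uncertainty as the Monte Carlo variance of the
    circuit output under Dirichlet(scale * w) perturbations of the sum-node
    weights. Larger `scale` means smaller perturbations."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Sample perturbed weights from a Dirichlet via normalized Gammas.
        gammas = [rng.gammavariate(scale * w, 1.0) for w in weights]
        z = sum(gammas)
        perturbed = [g / z for g in gammas]
        outputs.append(circuit_density(x, perturbed, params))
    mean = sum(outputs) / n_samples
    var = sum((o - mean) ** 2 for o in outputs) / n_samples
    return mean, var

weights = [0.7, 0.3]
params = [(0.0, 1.0), (5.0, 1.0)]  # (mean, std) of each Gaussian leaf
mean_q, var_q = weight_perturbation_uncertainty(0.0, weights, params)
```

Because the circuit output is linear in the sum-node weights, the Monte Carlo mean stays close to the unperturbed prediction, while the variance reflects how sensitive the query is to the weights; this is the quantity the first approach would read off as an uncertainty estimate.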

Cite

Llerena, J. V. (2020). Predictive uncertainty estimation for tractable deep probabilistic models. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 5210–5211). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/745
