Bayesian inference for inverse problems occurring in uncertainty analysis


Abstract

The inverse problem considered here is the estimation of the distribution of a non-observed random variable X, linked through a time-consuming physical model H to some noisy observed data Y. Bayesian inference is used to account for prior expert knowledge on X in a small-sample setting. A Metropolis-Hastings-within-Gibbs algorithm computes the posterior distribution of the parameters of the distribution of X through a data augmentation process. Since running H is expensive, the inference relies on a kriging emulator interpolating H from a numerical design of experiments (DOE). This approach involves several errors of different natures, and in this article we seek to measure and reduce their possible impact. In particular, we propose to use the so-called DAC criterion to assess, in the same exercise, the relevance of both the DOE and the prior distribution. After describing the calculation of this criterion for the emulator at hand, its behavior is illustrated on numerical experiments.
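Below is a minimal, illustrative sketch (not the authors' code) of the approach the abstract describes: a Metropolis-Hastings-within-Gibbs sampler with data augmentation, in which the expensive model H is replaced by a kriging (Gaussian process) emulator fitted on a small DOE. The toy model H, the Gaussian parametric family for X, the flat priors, and all dimensions and tuning constants are hypothetical choices made only for the example.

```python
import numpy as np
from scipy import stats
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# --- Stand-in for the time-consuming physical model, and noisy data Y = H(X) + eps
def H(x):
    return np.sin(3.0 * x) + 0.5 * x

true_mu, true_sigma, noise_sd = 0.3, 0.4, 0.05
n_obs = 20
X_true = rng.normal(true_mu, true_sigma, n_obs)
Y = H(X_true) + rng.normal(0.0, noise_sd, n_obs)

# --- Kriging emulator of H built from a small numerical DOE
doe_x = np.linspace(-1.5, 1.5, 15)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(doe_x, H(doe_x.ravel()))
def H_hat(x):
    return gp.predict(np.atleast_2d(x).T).ravel()

# --- MH-within-Gibbs with data augmentation: the unknowns are theta = (mu, sigma),
#     the parameters of the distribution of X, plus the latent X_1..X_n themselves.
n_iter, prop_sd = 3000, 0.1
mu, log_sigma = 0.0, np.log(0.5)
X = Y.copy()                       # crude initialization of the latent X_i
samples = np.empty((n_iter, 2))

def log_post_x(x_i, y_i, mu, sigma):
    # log p(x_i | y_i, theta) up to a constant, with H replaced by the emulator
    return (stats.norm.logpdf(y_i, H_hat(x_i)[0], noise_sd)
            + stats.norm.logpdf(x_i, mu, sigma))

for it in range(n_iter):
    sigma = np.exp(log_sigma)
    # 1) Metropolis-Hastings update of each latent X_i given theta and Y_i
    for i in range(n_obs):
        prop = X[i] + prop_sd * rng.normal()
        if np.log(rng.uniform()) < (log_post_x(prop, Y[i], mu, sigma)
                                    - log_post_x(X[i], Y[i], mu, sigma)):
            X[i] = prop
    # 2) Update of theta given the imputed X (flat priors on mu and log sigma)
    mu = rng.normal(X.mean(), sigma / np.sqrt(n_obs))   # conjugate-style Gibbs step
    prop = log_sigma + 0.2 * rng.normal()               # MH step on the log scale
    def log_lik_sigma(ls):
        return stats.norm.logpdf(X, mu, np.exp(ls)).sum()
    if np.log(rng.uniform()) < log_lik_sigma(prop) - log_lik_sigma(log_sigma):
        log_sigma = prop
    samples[it] = mu, np.exp(log_sigma)

print("posterior mean of (mu, sigma):", samples[n_iter // 2:].mean(axis=0))
```

In the article itself, the emulator error and the DOE adequacy are assessed jointly with the prior via the DAC criterion; the sketch above only illustrates the basic sampler structure, not that diagnostic.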

Citation (APA)

Fu, S., Celeux, G., Bousquet, N., & Couplet, M. (2015). Bayesian inference for inverse problems occurring in uncertainty analysis. International Journal for Uncertainty Quantification, 5(1), 73–98. https://doi.org/10.1615/Int.J.UncertaintyQuantification.2014011073
