Assessment of locally influential observations in Bayesian models


Abstract

In models with conditionally independent observations, it is shown that the posterior variance of the log-likelihood from observation i is a measure of that observation's local influence. This result is obtained by considering the Kullback-Leibler divergence between baseline and case-weight perturbed posteriors, with local influence being the curvature of this divergence evaluated at the baseline posterior. Case-weighting is formulated using quasi-likelihood and hence for binomial or Poisson observations, the posterior variance of an observation's log-likelihood provides a measure of sensitivity to mild mis-specification of its dispersion. In general, the case-weighted posteriors are quasi-posteriors because they do not arise from a formal sampling model. Their propriety is established under a simple sufficient condition. A second local measure of posterior change, the curvature of the Kullback-Leibler divergence between predictive densities, is seen to be the posterior variance (over future observations) of the expected log-likelihood, and can easily be estimated using importance sampling. Suggestions for identifying locally influential observations are given. The methodology is applied to a well-known simple linear model dataset, to a nonlinear state-space model, and to a random-effects binary response model. © 2007 International Society for Bayesian Analysis.
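The first diagnostic in the abstract is straightforward to compute from MCMC output: for each observation i, evaluate its log-likelihood at every posterior draw and take the sample variance across draws. The sketch below illustrates this under an assumed normal model with known unit variance; the data, the posterior draws of the mean, and the function name `local_influence` are all hypothetical, chosen only to make the computation concrete.

```python
import numpy as np

def local_influence(loglik_draws):
    """Posterior variance of each observation's log-likelihood.

    loglik_draws: array of shape (n_draws, n_obs), where entry (s, i)
    is log p(y_i | theta^(s)) evaluated at posterior draw theta^(s).
    Returns a length-n_obs vector, one influence measure per observation.
    """
    return np.var(loglik_draws, axis=0, ddof=1)

# Hypothetical illustration: y_i ~ N(mu, 1) with simulated "posterior"
# draws of mu (a real application would use draws from an MCMC sampler).
rng = np.random.default_rng(0)
y = np.array([0.1, -0.3, 0.2, 5.0])          # last point is a gross outlier
mu_draws = rng.normal(0.0, 0.2, size=2000)   # stand-in posterior draws

# log N(y_i | mu^(s), 1), dropping the constant -0.5*log(2*pi),
# which does not affect the variance across draws
loglik = -0.5 * (y[None, :] - mu_draws[:, None]) ** 2

influence = local_influence(loglik)
print(influence)
```

Because the outlying observation's log-likelihood changes most as mu varies over the posterior, its variance dominates the others, flagging it as locally influential in the sense described above.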

Citation (APA)

Millar, R. B., & Stewart, W. S. (2007). Assessment of locally influential observations in Bayesian models. Bayesian Analysis, 2(2), 365–384. https://doi.org/10.1214/07-BA216
