Prediction in M-complete problems with limited sample size

Abstract

We define a new Bayesian predictor called the posterior weighted median (PWM) and compare its performance to several other predictors including the Bayes model average under squared error loss, the Barbieri-Berger median model predictor, the stacking predictor, and the model average predictor based on Akaike's information criterion. We argue that PWM generally gives better performance than other predictors over a range of M-complete problems. This range is between the M-closed-M-complete boundary and the M-complete-M-open boundary. Indeed, as a problem gets closer to M-open, it seems that M-complete predictive methods begin to break down. Our comparisons rest on extensive simulations and real data examples. As a separate issue, we introduce the concepts of the 'Bail out effect' and the 'Bail in effect'. These occur when a predictor gives not just poor results but defaults to the simplest model ('bails out') or to the most complex model ('bails in') on the model list. Either can occur in M-complete problems when the complexity of the data generator is too high for the predictor scheme to accommodate. © 2013 International Society for Bayesian Analysis.
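The abstract names the PWM predictor but does not spell out its construction. As a minimal sketch, assuming PWM is interpreted as the weighted median of the individual models' point predictions with posterior model probabilities as weights (an illustrative reading, not the paper's exact definition), the computation might look like the following; the model predictions and posterior probabilities below are hypothetical.

```python
import numpy as np

def weighted_median(values, weights):
    """Return the weighted median: the smallest value at which the
    cumulative (normalized) weight reaches one half."""
    order = np.argsort(values)
    values = np.asarray(values, dtype=float)[order]
    weights = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(weights) / weights.sum()
    return values[np.searchsorted(cum, 0.5)]

# Hypothetical per-model point predictions for a new observation
# and the corresponding posterior model probabilities.
predictions = [1.8, 2.1, 2.4, 3.0]          # y-hat under models M1..M4
posterior_probs = [0.10, 0.45, 0.30, 0.15]  # p(M_k | data)

pwm_prediction = weighted_median(predictions, posterior_probs)
# Bayes model average (posterior-weighted mean) shown for comparison.
bma_prediction = float(np.dot(predictions, posterior_probs))

print(pwm_prediction, bma_prediction)  # 2.1 vs. 2.295 in this toy example
```

Under this reading, PWM differs from the Bayes model average in that it selects a single model's prediction (the posterior-weighted median one) rather than averaging, which can make it less sensitive to models whose predictions are far from the bulk.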

Clarke, J. L., Clarke, B., & Yu, C. W. (2013). Prediction in M-complete problems with limited sample size. Bayesian Analysis, 8(3), 647–690. https://doi.org/10.1214/13-BA826
