Bayesian shrinkage methods for partially observed data with many predictors


Abstract

Motivated by the increasing use of and rapid changes in array technologies, we consider the prediction problem of fitting a linear regression relating a continuous outcome Y to a large number of covariates X, for example, measurements from current, state-of-the-art technology. For most of the samples, only the outcome Y and surrogate covariates, W, are available. These surrogates may be data from prior studies using older technologies. Owing to the dimension of the problem and the large fraction of missing information, a critical issue is appropriate shrinkage of model parameters for an optimal bias-variance trade-off. We discuss a variety of fully Bayesian and empirical Bayes algorithms which account for uncertainty in the missing data and adaptively shrink parameter estimates for superior prediction. These methods are evaluated via a comprehensive simulation study. In addition, we apply our methods to a lung cancer data set, predicting survival time (Y) using qRT-PCR (X) and microarray (W) measurements. © Institute of Mathematical Statistics, 2013.
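The setting the abstract describes — X measured on only a small fraction of samples, a noisy surrogate W measured on all, and a shrinkage-regularized regression of Y on X — can be illustrated with a minimal two-stage sketch. This is not the authors' algorithm: the one-surrogate-per-covariate imputation model, the fixed ridge penalty `lam`, and all simulation settings below are illustrative assumptions, whereas the paper's methods treat the missing X and the shrinkage level within a Bayesian framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 30        # samples, predictors
n_obs = 40            # samples with X measured; the rest have only the surrogate W

# Simulated data: surrogates W are noisy versions of X; Y is linear in X
X = rng.normal(size=(n, p))
W = X + 0.5 * rng.normal(size=(n, p))
beta = rng.normal(scale=0.3, size=p)
Y = X @ beta + rng.normal(size=n)

def ridge_fit(A, y, lam):
    """Ridge estimate, i.e. the posterior mean under an i.i.d. Gaussian
    prior on the coefficients: (A'A + lam*I)^{-1} A'y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Stage 1: on the complete cases, regress each X_j on its surrogate W_j,
# then fill in X for the incomplete cases from their surrogates.
X_hat = X.copy()
for j in range(p):
    slope, intercept = np.polyfit(W[:n_obs, j], X[:n_obs, j], 1)
    X_hat[n_obs:, j] = slope * W[n_obs:, j] + intercept

# Stage 2: shrinkage regression of Y on the partially imputed X.
beta_hat = ridge_fit(X_hat, Y, lam=10.0)
```

This plug-in approach ignores the uncertainty in the imputed X, which is exactly the deficiency the fully Bayesian and empirical Bayes algorithms in the paper are designed to address.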

Citation
Boonstra, P. S., Mukherjee, B., & Taylor, J. M. G. (2013). Bayesian shrinkage methods for partially observed data with many predictors. Annals of Applied Statistics, 7(4), 2272–2292. https://doi.org/10.1214/13-AOAS668
