An Ancillarity Paradox Which Appears in Multiple Linear Regression

  • Brown, L. D.

Abstract

Consider a multiple linear regression in which $Y_i$, $i = 1, \dots, n$, are independent normal variables with variance $\sigma^2$ and $E(Y_i) = \alpha + V_i'\beta$, where $V_i \in R^r$ and $\beta \in R^r$. Let $\hat{\alpha}$ denote the usual least squares estimator of $\alpha$. Suppose that the $V_i$ are themselves observations of independent multivariate normal random variables with mean $0$ and known, nonsingular covariance matrix $\theta$. Then $\hat{\alpha}$ is admissible under squared error loss if $r \le 2$; it is inadmissible if $r \ge 3$. Several estimators dominating $\hat{\alpha}$ when $r \ge 3$ are presented. Analogous results are presented for the case where $\sigma^2$ or $\theta$ is unknown, and some other generalizations are also considered. It is noted that some of these results for $r \ge 3$ appear in earlier papers of Baranchik and of Takada. The $\{V_i\}$ are ancillary statistics in the above setting. Hence the admissibility of $\hat{\alpha}$ depends on the distribution of the ancillary statistics: if the $\{V_i\}$ are fixed instead of random, then $\hat{\alpha}$ is admissible. This fact contradicts a widely held notion about ancillary statistics; some interpretations and consequences of this paradox are briefly discussed.
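
The model in the abstract is straightforward to simulate. The Python sketch below only reproduces the setup (random normal covariates $V_i$, responses $Y_i = \alpha + V_i'\beta + \varepsilon_i$, and the usual least squares intercept $\hat{\alpha}$); the parameter values $n$, $r$, $\alpha$, $\beta$, $\sigma$, $\theta$ are illustrative assumptions, not taken from the paper, and the sketch does not implement the dominating estimators the paper constructs.

```python
# Minimal simulation sketch of the regression setup described in the abstract.
# All numerical values below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n, r = 50, 3                      # sample size and covariate dimension (r >= 3)
alpha = 1.0                       # true intercept
beta = np.ones(r)                 # true slope vector
sigma = 1.0                       # known error standard deviation
theta = np.eye(r)                 # known, nonsingular covariance of the V_i

# V_i ~ N_r(0, theta), independent across i (the random, ancillary design)
V = rng.multivariate_normal(np.zeros(r), theta, size=n)

# Y_i = alpha + V_i' beta + eps_i, with eps_i ~ N(0, sigma^2)
Y = alpha + V @ beta + sigma * rng.standard_normal(n)

# Usual least squares estimator alpha_hat: intercept of the fitted linear model
X = np.column_stack([np.ones(n), V])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
alpha_hat = coef[0]
print("least squares alpha_hat:", alpha_hat)
```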

Brown, L. D. (1990). An Ancillarity Paradox Which Appears in Multiple Linear Regression. The Annals of Statistics, 18(2), 471–493. https://doi.org/10.1214/aos/1176347602
