A Bayesian perspective on estimating mean, variance, and standard-deviation from data

  • Oliphant T

Abstract

After reviewing some classical estimators for mean, variance, and standard deviation and showing that unbiased estimates are not usually desirable, a Bayesian perspective is employed to determine what is known about the mean, variance, and standard deviation given only that a data set in fact has a common mean and variance. Maximum entropy is used to argue that the likelihood function in this situation should be the same as if the data were independent and identically distributed Gaussian. A noninformative prior is derived for the mean and variance, and Bayes' rule is used to compute the posterior Probability Density Function (PDF) of $(\mu; \sigma)$ as well as $(\mu; \sigma^2)$ in terms of the sufficient statistics $\bar{x} = \frac{1}{n}\sum_i x_i$ and $C = \frac{1}{n}\sum_i (x_i - \bar{x})^2$. From the joint distribution the marginals are determined. It is shown that $\frac{\mu - \bar{x}}{\sqrt{C}}\sqrt{n-1}$ is distributed as Student-$t$ with $n-1$ degrees of freedom, $\sigma\sqrt{\frac{2}{nC}}$ is distributed as generalized gamma with $c = -2$ and $a = \frac{n-1}{2}$, and $\sigma^2\,\frac{2}{nC}$ is distributed as inverted gamma with $a = \frac{n-1}{2}$. It is suggested to report the mean of these distributions as the estimate (or the peak if $n$ is too small for the mean to be defined) and a confidence interval surrounding the median.
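
Because the abstract gives the three marginal posteriors in closed form, the suggested point estimates and intervals can be computed directly from $\bar{x}$ and $C$. Below is a minimal Python sketch of that recipe using SciPy; it is not code from the paper. The function name `bayesian_summary`, the 95% interval level, and the mapping onto SciPy's `t`, `invgamma`, and `gengamma` parameterizations (including the use of the negative shape $c = -2$ for the standard-deviation marginal) are assumptions made here for illustration.

```python
import numpy as np
from scipy import stats

def bayesian_summary(x, level=0.95):
    """Posterior point estimates and equal-tailed credible intervals for the
    mean, variance, and standard deviation of a data set, built from the
    marginal posteriors quoted in the abstract.  Assumes n >= 2."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    C = np.mean((x - xbar) ** 2)            # note: divides by n, not n - 1
    lo, hi = (1.0 - level) / 2.0, 1.0 - (1.0 - level) / 2.0

    # mean:  (mu - xbar) * sqrt(n-1) / sqrt(C)  ~  Student-t with n-1 dof
    mu_post = stats.t(df=n - 1, loc=xbar, scale=np.sqrt(C / (n - 1)))
    mu_est = mu_post.mean() if n > 2 else xbar        # t mean needs dof > 1; peak is xbar

    # variance:  sigma^2 * 2/(n*C)  ~  inverted gamma with a = (n-1)/2,
    # i.e. sigma^2 ~ invgamma(a=(n-1)/2, scale=n*C/2)
    var_post = stats.invgamma(a=(n - 1) / 2.0, scale=n * C / 2.0)
    var_est = var_post.mean() if n > 3 else n * C / (n + 1)   # mean needs a > 1; else peak

    # standard deviation:  sigma * sqrt(2/(n*C))  ~  generalized gamma
    # with c = -2 and a = (n-1)/2 (SciPy's gengamma accepts negative c)
    std_post = stats.gengamma(a=(n - 1) / 2.0, c=-2.0, scale=np.sqrt(n * C / 2.0))
    std_est = std_post.mean() if n > 2 else np.sqrt(C)        # mean needs a > 1/2; else peak

    # equal-tailed intervals surround the posterior median by construction
    return {
        "mean":      (mu_est,  tuple(mu_post.ppf([lo, hi]))),
        "variance":  (var_est, tuple(var_post.ppf([lo, hi]))),
        "std. dev.": (std_est, tuple(std_post.ppf([lo, hi]))),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=20)
    for name, (est, (low, high)) in bayesian_summary(data).items():
        print(f"{name:10s} estimate {est:6.3f}   95% interval [{low:.3f}, {high:.3f}]")
```

For $n > 3$ the variance estimate returned above is the posterior mean $\frac{nC}{n-3} = \frac{1}{n-3}\sum_i (x_i - \bar{x})^2$, which follows directly from the quoted inverted-gamma marginal and differs from both the maximum-likelihood divisor $n$ and the unbiased divisor $n-1$.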

Cite

APA

Oliphant, T. E. (2006). A Bayesian perspective on estimating mean, variance, and standard-deviation from data. All Faculty Publications, (278), 1–17. Retrieved from http://scholarsarchive.byu.edu/facpub/278/
