Statistical Information and Likelihood*

  • Basu D


Abstract

In Part One of this essay the notion of 'statistical information generated by a data' is formulated in terms of some intuitively appealing principles of data analysis. The author comes out very strongly in favour of the unrestricted likelihood principle after demonstrating (to his own satisfaction) the reasonableness of the Bayes-Fisher postulate that, within the framework of a particular statistical model, the 'whole of the relevant information in the data' must be supposed to be summarised in the likelihood function generated by the data. Part Two begins with a brief discussion of some non-Bayesian methods of data analysis that originated in the writings of R. A. Fisher. Fisher's central thesis that the likelihood is only a point function is challenged; the principle of maximum likelihood is questioned and the limitations of the method are exposed. Part Three of the essay is woven around some paradoxical counterexamples. The author demonstrates (again to his own satisfaction) how such examples discredit the fiducial argument, underline the impropriety of improper Bayesianism, and expose the naivety of standard statistical practices such as (pin-point) null-hypothesis testing and 3σ interval estimates, and how at the same time they illuminate and strengthen the likelihood principle by putting it into its true Bayesian perspective.

Citation (APA)

Basu, D. (2011). Statistical Information and Likelihood*. In Selected Works of Debabrata Basu (pp. 207–277). Springer New York. https://doi.org/10.1007/978-1-4419-5825-9_25
