A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

  • Chernoff H

Abstract

In many cases an optimum or computationally convenient test of a simple hypothesis H0 against a simple alternative H1 may be given in the following form. Reject H0 if S_n = ∑_{j=1}^{n} X_j ≤ k, where X_1, X_2, ⋯, X_n are n independent observations of a chance variable X whose distribution depends on the true hypothesis and where k is some appropriate number. In particular, the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index ρ. If ρ_1 and ρ_2 are the indices corresponding to two alternative tests, then e = log ρ_1 / log ρ_2 measures the relative efficiency of these tests in the following sense. For large samples, a sample of size n with the first test will give about the same probabilities of error as a sample of size e·n with the second test. To obtain the above result, use is made of the fact that P(S_n ≤ na) behaves roughly like m^n, where m is the minimum value assumed by the moment generating function of X − a. It is shown that if H0 and H1 specify probability distributions of X which are very close to each other, one may approximate ρ by assuming that X is normally distributed.
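The quantity m = min_t E[exp(t(X − a))] in the abstract is what is now usually called the Chernoff bound rate. As a minimal illustrative sketch (not from the paper itself), the snippet below computes m by a simple grid search for a Bernoulli(p) variable, where a closed form is known: m = exp(−D(a‖p)), with D the Bernoulli Kullback–Leibler divergence. The function names and the grid-search approach are illustrative choices, not part of the original work.

```python
import math

def mgf_shifted(t, p, a):
    # E[exp(t * (X - a))] for X ~ Bernoulli(p):
    # exp(-t*a) * ((1 - p) + p * exp(t))
    return math.exp(-t * a) * ((1 - p) + p * math.exp(t))

def chernoff_m(p, a):
    # m = min over t of the shifted MGF; crude grid search on t in [-10, 10]
    ts = (i / 1000 - 10 for i in range(20001))
    return min(mgf_shifted(t, p, a) for t in ts)

def bernoulli_kl(a, p):
    # D(a || p) for Bernoulli distributions
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

p, a = 0.5, 0.3
m = chernoff_m(p, a)
m_closed = math.exp(-bernoulli_kl(a, p))
print(m, m_closed)  # the two values should agree closely
```

With these numbers, P(S_n ≤ 0.3·n) for a fair-coin sum decays roughly like m^n ≈ 0.92^n, which is the exponential rate the abstract refers to.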

Citation (APA)

Chernoff, H. (1952). A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations. The Annals of Mathematical Statistics, 23(4), 493–507. https://doi.org/10.1214/aoms/1177729330
