In this chapter, some asymptotic optimality theory of hypothesis testing is developed. We consider testing one sequence of distributions against another (the asymptotic version of testing a simple hypothesis against a simple alternative). It turns out that this problem degenerates if the two sequences are too close together or too far apart. The non-degenerate situation can be characterized in terms of a suitable distance or metric between the distributions of the two sequences. Two such metrics, the total variation and the Hellinger metric, will be introduced below. We begin by considering some of the basic metrics for probability distributions that are useful in statistics. Fundamental inequalities relating these metrics are developed, from which some large sample implications can be derived. We now recall the definition of a metric space; also see Section A.2 in the appendix.
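Before developing the theory, it may help to see the two metrics concretely. The sketch below uses the standard definitions for discrete distributions, total variation $\mathrm{TV}(P,Q) = \tfrac{1}{2}\sum_i |p_i - q_i|$ and Hellinger distance $H(P,Q) = \bigl(\tfrac{1}{2}\sum_i (\sqrt{p_i} - \sqrt{q_i})^2\bigr)^{1/2}$, and checks the classical inequalities $H^2 \le \mathrm{TV} \le H\sqrt{2 - H^2}$ relating them; the example distributions `p` and `q` are illustrative, not from the text.

```python
import math

def total_variation(p, q):
    # TV(P, Q) = (1/2) * sum_i |p_i - q_i| for discrete distributions
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    # H(P, Q) = sqrt( (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2 ),
    # normalized so that 0 <= H <= 1
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

# Two illustrative distributions on three points
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

tv = total_variation(p, q)   # = 0.1
h = hellinger(p, q)

# The inequalities H^2 <= TV <= H * sqrt(2 - H^2) show the two metrics
# are equivalent: sequences of distributions merge (or separate) in one
# metric if and only if they do so in the other.
assert h ** 2 <= tv <= h * math.sqrt(2 - h ** 2)
```

These inequalities are what make either metric usable for characterizing the non-degenerate testing problem described above: closeness in one metric controls closeness in the other.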
Large Sample Optimality. (2005). In Testing Statistical Hypotheses (pp. 527–582). Springer New York. https://doi.org/10.1007/0-387-27605-x_13