Bootstrap Tests of Hypotheses

  • Ventura V
Abstract

Simulation-based calculation of p-values is an important technique in situations where it is difficult to obtain an exact or approximate distribution for a test statistic, or where such an approximation exists but is of dubious validity, either because the conditions it requires are not met or because it relies on questionable assumptions about the distribution of the data. Even in applications where we are fairly confident in a particular parametric model and the statistical analysis based on that model, it can still be helpful, in the spirit of robustness, to see what can be inferred from the data without particular parametric model assumptions. A substantial literature has demonstrated, both theoretically and in numerical studies, that the bootstrap is widely effective (Davison and Hinkley in Bootstrap methods and their applications. Cambridge University Press, Cambridge, 1997; Efron and Tibshirani in An introduction to the bootstrap. Chapman and Hall, New York, 1993). But the simplicity of the bootstrap conceals an important point: its properties depend on how resampling is done; arbitrary shuffles of the data do not necessarily accomplish desired statistical goals. Moreover, in the context of hypothesis testing, the p-value must be obtained under the hypothetical reality imposed by the null hypothesis. In this chapter, we review the general framework for statistical tests of hypotheses, and introduce the basics of Monte Carlo, permutation, and bootstrap tests.
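The abstract's central caveat, that resampling must be carried out under the null hypothesis, can be illustrated with a minimal sketch. The chapter's own procedures are not reproduced here; the example below assumes a two-sample comparison of means, and enforces the null by recentering both samples to the pooled mean before resampling, so the bootstrap distribution of the test statistic reflects the hypothetical reality imposed by H0.

```python
import numpy as np

def bootstrap_test(x, y, n_boot=5000, seed=0):
    """Bootstrap p-value for H0: the two samples share a common mean.

    Resampling is done under the null: each sample is recentered to the
    pooled mean, so bootstrap replicates are drawn from a world in which
    H0 holds. Resampling the raw data instead would be an "arbitrary
    shuffle" that does not target the null distribution.
    """
    rng = np.random.default_rng(seed)
    t_obs = abs(x.mean() - y.mean())          # observed test statistic
    pooled = np.concatenate([x, y]).mean()
    x0 = x - x.mean() + pooled                # enforce H0: common mean
    y0 = y - y.mean() + pooled
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x0, size=len(x0), replace=True)
        yb = rng.choice(y0, size=len(y0), replace=True)
        t_boot[b] = abs(xb.mean() - yb.mean())
    # p-value: proportion of null-world statistics at least as extreme,
    # with the +1 correction so the p-value is never exactly zero
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)
```

For example, two samples drawn from the same distribution should typically yield a large p-value, while samples with clearly different means should yield a small one. The recentering step is what distinguishes a bootstrap *test* from a bootstrap confidence interval: the same resampling machinery, but applied to data transformed to satisfy the null.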

Citation (APA)
Ventura, V. (2010). Bootstrap Tests of Hypotheses. In Analysis of Parallel Spike Trains (pp. 383–398). Springer US. https://doi.org/10.1007/978-1-4419-5675-0_18
