Statistics is nowadays the customary language of functional imaging. It is common to express an experimental setting as a set of null hypotheses over complex models and to present results as maps of p-values derived from sophisticated probability distributions. However, the growing interest in the development of advanced statistical algorithms is not always paralleled by similar attention to how these techniques may regiment the ways in which users draw inferences from their data. This article investigates the logical bases of current statistical approaches in functional imaging and probes their suitability for inductive inference in neuroscience. The frequentist approach to statistical inference is reviewed with attention to its two main constituents: Fisherian "significance testing" and Neyman-Pearson "hypothesis testing". It is shown that these conceptual systems, although similar in the univariate testing case, dissociate into two quite different methods of inference when applied to the multiple testing problem, the typical framework of functional imaging. This difference is explained with reference to specific issues, such as small volume correction, that are most likely to generate confusion in the practitioner. Further insight into this problem is achieved by recasting the multiple comparison problem into a multivariate Bayesian formulation. This formulation introduces a new perspective in which the inferential process is more clearly defined in two distinct steps. The first, inductive in form, uses exploratory techniques to acquire preliminary notions about the spatial patterns and the signal and noise characteristics of the data. The (smaller) set of likely spatial patterns thus generated is then tested on new data with a more rigorous multiple hypothesis testing procedure (the deductive step).
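To make the multiple testing problem concrete, the following sketch (an illustration written for this abstract, not code from the article) simulates many simultaneous null tests, as in a voxel-wise analysis, and contrasts an uncorrected per-test threshold with a Bonferroni-corrected one. The function names and the choice of Bonferroni as the family-wise correction are illustrative assumptions; functional imaging practice typically uses random field theory or permutation methods instead.

```python
# Illustrative simulation: family-wise error (FWE) under multiple testing.
# All null hypotheses are true, so every rejection is a false positive.
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def family_wise_error(n_tests, n_experiments, alpha, correct=False):
    """Fraction of experiments yielding >= 1 false positive.

    With correct=True the per-test threshold is alpha / n_tests
    (Bonferroni), which controls the family-wise error at ~alpha.
    """
    threshold = alpha / n_tests if correct else alpha
    rng = random.Random(0)  # fixed seed for reproducibility
    errors = 0
    for _ in range(n_experiments):
        # One "experiment": n_tests independent null statistics.
        if any(two_sided_p(rng.gauss(0, 1)) < threshold
               for _ in range(n_tests)):
            errors += 1
    return errors / n_experiments

uncorrected = family_wise_error(1000, 500, 0.05)
corrected = family_wise_error(1000, 500, 0.05, correct=True)
print(uncorrected, corrected)  # uncorrected FWE near 1, corrected near alpha
```

With 1000 tests at an uncorrected 0.05 threshold, almost every experiment produces at least one spurious "activation", which is why per-comparison significance testing and family-wise hypothesis testing diverge so sharply in the imaging setting described above.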