This article argues that the realist critique of experimental design for evaluating interventions in complex social systems is valid but incomplete, and makes the case for experimental approaches to testing realist theory and estimating effect sizes. The aim is to provide a means for scientific understanding of the relative value of interventions in different contexts, for whom and to what effect. Grounded in a realist philosophy of science and a realist approach to evaluation, the paper argues for using experimental design to test, and to estimate the magnitude of, the outcome in a hypothesised realist Context-Mechanism-Outcome (CMO) configuration. The approach requires that program theory (rather than the program itself) be the unit of analysis, and that context, which is crucial for a mechanism firing, be brought into the effect size equation while attempts are made to control for the effects of other mechanisms. The focus of this paper is on the general approach rather than on a particular method. The approach was applied in an evaluation of a youth mentoring program, using a matched-pair, pre- and post-test, control-group quasi-experimental design. The results of this application were limited but provided insight into the extent to which a particular mentoring mechanism, when properly targeted, could generate outcomes for certain students. This approach to evaluation is consistent with the underlying principles of scientific realism and theory testing, and provides a means for generating evidence about the value of interventions in complex social systems, for whom and to what extent.
Hawkins, A. (2014). The case for experimental design in realist evaluation. Learning Communities: International Journal of Learning in Social Contexts, 14, 46–59. https://doi.org/10.18793/lcj2014.14.04