We wish to comment on two aspects of Nancy Cartwright's thoughts about the use of the results of research to guide decisions in policy and practice.1

First, we believe it is incorrect to imply that this challenge can be addressed by taking individual studies as the starting point. Only very rarely will the results of a single trial provide a sufficiently reliable guide for policy or practice. Single trials (or other studies) addressing a particular question usually have implications for research, not practice. Where more than one similar trial is available, the appropriate starting point for assessing applicability in practice should be systematic reviews of all the relevant individual studies.

Second, in trying to judge whether interventions studied in research “will work for us”, Cartwright, like many others, conceptualises the challenge as being to demonstrate that the characteristics and circumstances of the research are sufficiently similar to those to which extrapolation is being contemplated. But why should the challenge be conceptualised that way round? Why not instead ask “Are there any good reasons to believe that the research is not relevant to us, that ‘it won't work for us’?”2 If there are not, then, given the undesirable alternative ways of reaching a decision, the default position should be that the result is applicable. Fletcher3 has made a similar point in relation to subgroup analyses, suggesting that a good working assumption is that the main result probably applies to everyone, unless good evidence exists to the contrary.

We declare that we have no conflicts of interest.
Petticrew, M., & Chalmers, I. (2011). Use of research evidence in practice. The Lancet, 378(9804), 1696. https://doi.org/10.1016/S0140-6736(11)61735-2