Abstract
"the whole of medicine depends on the transparent reporting of clinical trials" 1 S OCIETY often holds in high esteem individuals who present themselves, and their interests, in the most positive and optimistic light even at the expense of deception. 2 And it appears that such behaviours are similarly rewarded in the healthcare research community. Statistically positive results, namely, those passing a critical value, arbitrarily set, typically at 5%, enter the holy grail. Statistically negative results, those not passing the arbitrarily set critical value, are important and should be considered equally and reported alongside statistically positive results. Statistically negative results can, and often do, provide important clinical information about the intervention under investigation. People volunteering to participate in research, particularly those agreeing to be allocated to an intervention, by chance (i.e., randomized), often with potential of substantial health risks, expect that the information gleaned from their involvement will have one of several possible outcomes. Most immediately it might improve their health; and it might provide accumulating information about the benefits and harms of the intervention under consideration. While such expectations are a minimum they can only be realized if the data is actually reported. As we enter a new millennium such minimally reasonable expectations , sadly, do not always happen. More than 40 years ago a researcher at a Canadian university reviewing the results from 294 reports, published in four leading psychology journals, in 1958, observed that the vast majority of them-97.3%-reported statistically positive results. 3 Extending the study to include 456 reports of research published in three leading healthcare journals, in 1986, provided almost identical results. 3 This propensity to report statistically significant results is known as publication bias. The existence of publication bias has been found, repeatedly, for a variety of research designs, including randomized controlled trials, (RCTs) considered an important form of evidence to help healthcare providers and consumers make more informed decision making. We can now say with confidence that this bias is almost ubiquitous across many jurisdictions. In this issue of the Journal, Hall and colleagues provide further evidence of the existence of publication bias, from a large Eastern Canadian region. 4 Publication bias has many ways of expressing itself. Approximately 40% of reports initially presented at scientific meetings are not reported as full publications with the biggest publication predictor being whether the conference abstract reports a statistically significant result; 5 authors are less likely to write-up for publication the results of statistically negative results; 6 such results are less likely to be accepted for publication by journals; 6 and journal peer reviewers are more likely to recommend publication of reports with statistically positive results. 7 Even when publication is sought, reports with statistically negative results take considerably longer to be published compared to reports with statistically positive results; 8 and pharmaceutical companies, who fund a substantial number of RCTs, are more likely to seek publication of results with statistically positive outcomes. 9 The dire consequences of publication bias were demonstrated more than 20 years ago. 
Using a clinical trials registry containing both published and unpublished reports, Simes10 analyzed trials of alkylating …