Designs of trials assessing interventions to improve the peer review process: A vignette-based survey



Abstract

Background: We aimed to determine the best study designs for assessing interventions to improve the peer review process, according to experts' opinions. Furthermore, for interventions previously evaluated, we determined whether the study designs actually used were those rated as the best.

Methods: Study design: A series of six vignette-based surveys exploring the best study designs for six different interventions (training peer reviewers, adding an expert to the peer review process, use of reporting guideline checklists, blinding peer reviewers to the results (i.e., results-free peer review), giving incentives to peer reviewers, and post-publication peer review). Vignette construction: Vignettes were case scenarios of trials assessing interventions aimed at improving the quality of peer review. For each intervention, the vignette included the study type (e.g., randomized controlled trial [RCT]), setting (e.g., single biomedical journal), and type of manuscript assessed (e.g., actual manuscripts received by the journal); each of these three features varied between vignettes. Participants: Researchers with expertise in peer review or the methodology of clinical trials. Outcome: Participants were presented with two vignettes describing two different study designs to assess the same intervention and had to indicate which design they preferred on a scale from -5 (preference for study A) to 5 (preference for study B), with 0 indicating no preference between the suggested designs (primary outcome). Secondary outcomes were trust in the results and feasibility of the designs.

Results: A total of 204 experts assessed 1044 paired comparisons. The preferred study type was an RCT with randomization of manuscripts for four interventions (adding an expert, use of a reporting guideline checklist, results-free peer review, and post-publication peer review) and an RCT with randomization of peer reviewers for two interventions (training peer reviewers and using incentives). The preferred setting was mainly several biomedical journals from different publishers, and the preferred type of manuscript was actual manuscripts submitted to journals. However, the most feasible designs were often cluster RCTs and interrupted time series analyses set in a single biomedical journal, with the assessment of a fabricated manuscript. Three interventions had been previously assessed: none used the design rated first in preference by experts.

Conclusion: The vignette-based survey allowed us to identify the best study designs for assessing different interventions to improve peer review, according to experts' opinions. There is a gap between the preferred study designs and the designs actually used.

Citation (APA)

Heim, A., Ravaud, P., Baron, G., & Boutron, I. (2018, October 15). Designs of trials assessing interventions to improve the peer review process: A vignette-based survey. BMC Medicine. BioMed Central Ltd. https://doi.org/10.1186/s12916-018-1167-7
