Reproducible biomedical benchmarking in the cloud: Lessons from crowd-sourced data challenges

Abstract

Challenges are achieving broad acceptance for addressing many biomedical questions and enabling tool assessment. But ensuring that the methods evaluated are reproducible and reusable is complicated by the diversity of software architectures, input and output file formats, and computing environments. To mitigate these problems, some challenges have leveraged new virtualization and compute methods, requiring participants to submit cloud-ready software packages. We review recent data challenges with innovative approaches to model reproducibility and data sharing, and outline key lessons for improving quantitative biomedical data analysis through crowd-sourced benchmarking challenges.
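
A common pattern behind the "cloud-ready software packages" mentioned above is a container whose entrypoint reads challenge data from a mounted input directory and writes predictions to a standardized output location, so organizers can run every submission identically. The short Python sketch below illustrates that contract; the mount points /input and /output, the file names, the id column, and the placeholder predict function are assumptions for illustration, not the specification of any particular challenge.

    #!/usr/bin/env python3
    """Illustrative entrypoint for a containerized challenge submission.

    Assumes the organizer mounts test data read-only at /input and collects
    predictions from /output; both paths are illustrative, not prescribed.
    """
    import csv
    from pathlib import Path

    INPUT_DIR = Path("/input")    # assumed mount point for organizer-held data
    OUTPUT_DIR = Path("/output")  # assumed location predictions are read from

    def predict(record: dict) -> float:
        # Placeholder: a real submission would load and apply a trained model.
        return 0.0

    def main() -> None:
        OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
        with open(INPUT_DIR / "test.csv", newline="") as fin, \
             open(OUTPUT_DIR / "predictions.csv", "w", newline="") as fout:
            reader = csv.DictReader(fin)
            writer = csv.writer(fout)
            writer.writerow(["id", "prediction"])
            for row in reader:
                writer.writerow([row["id"], predict(row)])

    if __name__ == "__main__":
        main()

In this style of challenge the organizers, rather than the participants, typically execute the container against held-back data, and the same image can later be re-run unchanged, which is the reproducibility and reusability benefit the abstract describes.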

Citation

Ellrott, K., Buchanan, A., Creason, A., Mason, M., Schaffter, T., Hoff, B., … Guinney, J. (2019). Reproducible biomedical benchmarking in the cloud: Lessons from crowd-sourced data challenges. Genome Biology, 20(1). https://doi.org/10.1186/s13059-019-1794-0
