Challenge Results are not Reproducible

Abstract

While clinical trials are the state-of-the-art method for assessing the effect of new medication in a comparative manner, benchmarking in the field of medical image analysis is performed by so-called challenges. Recently, a comprehensive analysis of multiple biomedical image analysis challenges revealed large discrepancies between the impact of challenges and the quality control of their design and reporting standards. This work follows up on these results and addresses the specific question of the reproducibility of the participants' methods. To determine whether alternative interpretations of the method descriptions may change the challenge ranking, we reproduced the algorithms submitted to the 2019 Robust Medical Instrument Segmentation (ROBUST-MIS) challenge. The leaderboard differed substantially between the original challenge and the reimplementation, indicating that challenge rankings may not be sufficiently reproducible.
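One way to quantify how strongly two leaderboards disagree is a rank correlation measure such as Kendall's tau. The following is a minimal sketch of that comparison; the team ranks used here are illustrative placeholders, not the actual ROBUST-MIS results reported in the paper.

```python
# Sketch: quantifying agreement between an original and a reproduced
# challenge ranking with Kendall's tau (1.0 = identical orderings,
# values near 0 or below indicate substantial disagreement).
from scipy.stats import kendalltau

# Hypothetical ranks of the same five teams (1 = best) in the original
# challenge and in the reimplementation.
original_rank = [1, 2, 3, 4, 5]
reproduced_rank = [3, 1, 5, 2, 4]

tau, p_value = kendalltau(original_rank, reproduced_rank)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.2f})")
```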

Cite

Reinke, A., Grab, G., & Maier-Hein, L. (2023). Challenge Results are not Reproducible. In Informatik aktuell (pp. 198–203). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-658-41657-7_43
