Significant Improvements over the State of the Art? A Case Study of the MS MARCO Document Ranking Leaderboard

Abstract

Leaderboards are a ubiquitous part of modern research in applied machine learning. By design, they sort entries into some linear order, where the top-scoring entry is recognized as the "state of the art" (SOTA). Due to the rapid progress being made today, particularly with neural models, the top entry in a leaderboard is replaced with some regularity. These replacements are touted as improvements in the state of the art. Such pronouncements, however, are almost never qualified with significance testing. In the context of the MS MARCO document ranking leaderboard, we pose a specific question: How do we know if a run is significantly better than the current SOTA? Against the backdrop of recent IR debates on scale types, our study proposes an evaluation framework that explicitly treats certain outcomes as distinct and avoids aggregating them into a single-point metric. Empirical analysis of SOTA runs from the MS MARCO document ranking leaderboard reveals insights about how one run can be "significantly better" than another that are obscured by the current official evaluation metric (MRR@100).
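
The question posed in the abstract can be made concrete with a small sketch. Assuming we have, for each query, the rank at which each of two runs first retrieves a relevant document (0 if the document is missed), the Python snippet below computes MRR@100 and a paired sign test, one standard way to test whether run A beats run B on more queries than chance would predict, and a test in the spirit of the paper's focus on per-query outcomes rather than a single aggregated score. The function names and toy data are illustrative, not taken from the paper.

import math
from typing import Sequence

def mrr_at_k(ranks: Sequence[int], k: int = 100) -> float:
    # Mean reciprocal rank at cutoff k; rank 0 means the relevant
    # document was not retrieved at all.
    return sum(1.0 / r for r in ranks if 0 < r <= k) / len(ranks)

def better(a: int, b: int) -> bool:
    # True if rank a beats rank b: a found the document, and either
    # b missed it or a ranked it earlier.
    return a > 0 and (b == 0 or a < b)

def sign_test(ranks_a: Sequence[int], ranks_b: Sequence[int]) -> float:
    # Two-sided paired sign test on per-query win/loss outcomes.
    # Ties (including queries both runs miss) are dropped, as is
    # conventional. Returns the p-value under H0: equal win rates.
    wins_a = sum(better(a, b) for a, b in zip(ranks_a, ranks_b))
    wins_b = sum(better(b, a) for a, b in zip(ranks_a, ranks_b))
    n = wins_a + wins_b
    if n == 0:
        return 1.0
    k = min(wins_a, wins_b)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Toy example: per-query rank of the first relevant document for two runs.
run_a = [1, 3, 0, 2, 1, 10, 4, 1, 2, 6]
run_b = [2, 3, 5, 2, 1, 0, 7, 3, 4, 9]
print(f"MRR@100 A = {mrr_at_k(run_a):.3f}, B = {mrr_at_k(run_b):.3f}")
print(f"sign test p = {sign_test(run_a, run_b):.3f}")

Note that the sign test discards the magnitude of each per-query difference, which is one way to sidestep the scale-type concerns about averaging reciprocal ranks that the abstract alludes to; it is offered here only as an illustration of paired significance testing over per-query outcomes.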

Citation (APA)

Lin, J., Campos, D., Craswell, N., Mitra, B., & Yilmaz, E. (2021). Significant Improvements over the State of the Art? A Case Study of the MS MARCO Document Ranking Leaderboard. In SIGIR 2021 - Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 2283–2287). Association for Computing Machinery, Inc. https://doi.org/10.1145/3404835.3463034
