Abstract
We articulate the design and implementation of the MS MARCO document ranking and passage ranking leaderboards. In contrast to "standard" community-wide evaluations such as those at TREC, which can be characterized as simultaneous games, leaderboards represent sequential games, where every player move is immediately visible to the entire community. The central challenge with this setup is that every leaderboard submission leaks information about the held-out evaluation set, which conflicts with the fundamental tenet in machine learning of separating training and test data. These "leaks", accumulated over long periods of time, threaten the validity of the insights that can be derived from the leaderboards. In this paper, we share our experiences grappling with this issue over the past few years and how our considerations are operationalized into a coherent submission policy. Our work provides a useful guide to help the community understand the design choices made in the popular MS MARCO leaderboards and offers lessons for designers of future leaderboards.
Citation
Lin, J., Campos, D., Craswell, N., Mitra, B., & Yilmaz, E. (2022). Fostering Coopetition While Plugging Leaks: The Design and Implementation of the MS MARCO Leaderboards. In SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 2939–2948). Association for Computing Machinery, Inc. https://doi.org/10.1145/3477495.3531725