An axiomatic analysis of diversity evaluation metrics: Introducing the rank-biased utility metric

Abstract

Many evaluation metrics have been defined to evaluate the effectiveness of ad-hoc retrieval and search result diversification systems. However, it is often unclear which evaluation metric should be used to analyze the performance of retrieval systems for a specific task. Axiomatic analysis is an informative mechanism for understanding the fundamentals of metrics and their suitability for particular scenarios. In this paper, we define a constraint-based axiomatic framework to study the suitability of existing metrics in search result diversification scenarios. The analysis informed the definition of Rank-Biased Utility (RBU), an adaptation of the well-known Rank-Biased Precision metric that takes into account redundancy and the user effort associated with inspecting documents in the ranking. Our experiments over standard diversity evaluation campaigns show that the proposed metric captures quality criteria reflected by different metrics, making it suitable in the absence of knowledge about particular features of the scenario under study.
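For intuition, here is a minimal sketch of the kind of computation involved. Rank-Biased Precision (Moffat & Zobel, 2008) scores a ranking as the expected per-document gain under a user who moves from one rank to the next with persistence probability p; an RBU-style variant additionally discounts redundant coverage of an already-seen intent and charges a fixed effort cost per inspected document. The discount factor gamma, the effort constant, and the exact way they combine below are illustrative assumptions, not the paper's definition of RBU.

```python
from typing import List, Set

def rbp(gains: List[float], p: float = 0.8) -> float:
    """Rank-Biased Precision (Moffat & Zobel, 2008): expected gain
    under a user who inspects rank k with probability p**(k-1)."""
    return (1 - p) * sum(g * p ** k for k, g in enumerate(gains))

def rbu_sketch(docs: List[Set[str]], p: float = 0.8,
               gamma: float = 0.5, effort: float = 0.01) -> float:
    """Illustrative RBU-style score (assumed form, not the paper's
    exact formula): like RBP, but a document's gain for an intent
    decays geometrically (gamma) with each earlier document that
    already covered that intent, and every inspected document
    incurs a fixed effort cost. `docs` maps each ranked document
    to the set of query intents it is relevant to."""
    intents = set().union(*docs) if docs else set()
    if not intents:
        return 0.0
    seen = {i: 0 for i in intents}  # times each intent was covered so far
    score = 0.0
    for k, covered in enumerate(docs):
        # redundancy-discounted utility, averaged over all intents
        gain = sum(gamma ** seen[i] for i in covered) / len(intents)
        score += p ** k * (gain - effort)  # charge effort for inspection
        for i in covered:
            seen[i] += 1
    return score

# A diverse ranking should outscore a redundant one of equal relevance.
diverse = [{"a"}, {"b"}, {"c"}]
redundant = [{"a"}, {"a"}, {"a"}]
print(rbu_sketch(diverse) > rbu_sketch(redundant))  # True
```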

Citation (APA)

Amigó, E., Spina, D., & Carrillo-De-Albornoz, J. (2018). An axiomatic analysis of diversity evaluation metrics: Introducing the rank-biased utility metric. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2018) (pp. 625–634). Association for Computing Machinery. https://doi.org/10.1145/3209978.3210024
