Adaptive effort for search evaluation metrics

Abstract

We explain a wide range of search evaluation metrics as the ratio of users’ gain to effort for interacting with a ranked list of results. Under this explanation, many existing metrics measure users’ effort as linear in the (expected) number of examined results, implicitly assuming that users spend the same effort to examine different results. We adapt current metrics to account for differing effort on relevant and non-relevant documents. Results show that such adaptive effort metrics better correlate with and predict user perceptions of search quality.
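The gain-to-effort framing above can be illustrated with a minimal sketch. The function name, the binary relevance input, and the per-document effort weights (`effort_rel`, `effort_nonrel`) are illustrative assumptions, not the paper's actual formulation; the point is only that effort is accumulated per examined document, with a different cost for relevant and non-relevant results, instead of counting every examined result equally.

```python
def adaptive_effort_metric(relevances, effort_rel=1.0, effort_nonrel=0.5):
    """Ratio of cumulative gain to cumulative effort over a ranked list.

    relevances:    binary relevance labels for the examined results, in rank order.
    effort_rel:    assumed cost of examining a relevant result (hypothetical value).
    effort_nonrel: assumed cost of examining a non-relevant result (hypothetical value).
    """
    gain = sum(relevances)  # unit gain per relevant result
    effort = sum(effort_rel if r else effort_nonrel for r in relevances)
    return gain / effort if effort else 0.0


# With equal effort weights, the ratio reduces to precision over the
# examined results; unequal weights yield an adaptive-effort variant.
score = adaptive_effort_metric([1, 0, 1, 0])
```

Setting `effort_rel == effort_nonrel` recovers a conventional metric in which effort is linear in the number of examined results, which is the baseline behavior the paper generalizes.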

Citation (APA)

Jiang, J., & Allan, J. (2016). Adaptive effort for search evaluation metrics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9626, pp. 187–199). Springer Verlag. https://doi.org/10.1007/978-3-319-30671-1_14
