Explaining user performance in information retrieval: Challenges to IR evaluation

Abstract

The paper makes three points of significance for IR research: (1) The Cranfield paradigm of IR evaluation seems to lose power when one looks at human instead of system performance. (2) Searchers using IR systems in real life issue rather short queries, which individually often perform poorly. When used in sessions, however, they may be surprisingly effective. The searchers' strategies have not been sufficiently described and therefore cannot be properly understood, supported, or evaluated. (3) Searchers in real life seek to optimize the entire information access process, not just result quality. Evaluating output alone is insufficient to explain searcher behavior.
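To make point (2) concrete, here is a minimal sketch in Python of how several individually weak queries can jointly cover most of the relevant documents over a session. The document identifiers and result sets are entirely hypothetical illustrations, not data from the paper:

import string  # not required; kept standard-library only

# Hypothetical set of relevant documents for one information need.
relevant = {"d1", "d2", "d3", "d4", "d5"}

# Hypothetical top-5 result sets for three short queries in one session.
session = [
    {"d1", "d9", "d12", "d17", "d20"},   # query 1 retrieves 1 of 5 relevant docs
    {"d2", "d3", "d9", "d14", "d21"},    # query 2 retrieves 2 of 5
    {"d4", "d5", "d12", "d18", "d22"},   # query 3 retrieves 2 of 5
]

def recall(retrieved, relevant):
    """Fraction of the relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant)

for i, results in enumerate(session, start=1):
    print(f"query {i}: recall = {recall(results, relevant):.2f}")  # 0.20, 0.40, 0.40

# Session-level recall over the union of all results in the session.
union = set().union(*session)
print(f"session: recall = {recall(union, relevant):.2f}")  # 1.00

Each query alone looks poor under a per-query Cranfield-style measure, yet the session as a whole covers the full relevant set, which is the kind of effect the abstract argues output-only evaluation misses.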

Cite

APA: Järvelin, K. (2009). Explaining user performance in information retrieval: Challenges to IR evaluation. In Lecture Notes in Computer Science (Vol. 5766, pp. 289–296). Springer. https://doi.org/10.1007/978-3-642-04417-5_28
