User's Knowledge and Information Needs in Information Retrieval Evaluation

Abstract

Existing evaluation measures for information retrieval algorithms still lack awareness of the user's cognitive state and its dynamics. They often consider an isolated query-document environment, ignoring the user's prior knowledge and the motivation behind the query. The retrieval algorithms and evaluation measures that do account for these factors limit a result's relevance to a single search session, query, or search goal. We present a novel evaluation measure that overcomes this limitation. The framework measures the relevance of a result/document by examining its content and assessing the possible learning outcomes for a specific user; hence, not all documents are relevant to all users. The proposed evaluation measure rewards a result's content for its novelty with respect to what the user already knows and what has been proposed previously, and for its contribution to achieving the user's search goals/needs. We demonstrate the effectiveness of the measure by comparing it to the knowledge gain reported by 361 crowd-sourced users searching the Web across 10 different topics.
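To make the idea concrete, below is a minimal, hypothetical Python sketch of a user-aware scoring function in the spirit described above. The set-based formulation, the function name `score_document`, and all term-set inputs are illustrative assumptions for exposition only, not the paper's actual measure (see the cited article for that).

```python
from typing import Set

def score_document(doc_terms: Set[str],
                   user_knowledge: Set[str],
                   seen_terms: Set[str],
                   goal_terms: Set[str]) -> float:
    """Toy user-aware relevance score: reward content that is novel
    to this user and that contributes to the search goal."""
    if not doc_terms or not goal_terms:
        return 0.0
    # Novelty: the part of the document the user neither already knows
    # nor has been shown in previously proposed results.
    novel = doc_terms - user_knowledge - seen_terms
    novelty = len(novel) / len(doc_terms)
    # Goal contribution: how much of that novel content covers the goal.
    contribution = len(novel & goal_terms) / len(goal_terms)
    return novelty * contribution

# Example: the same document scores differently for different users.
doc = {"transformer", "attention", "encoder", "decoder"}
goal = {"transformer", "attention"}
novice = score_document(doc, set(), set(), goal)                        # 1.0
expert = score_document(doc, {"transformer", "attention"}, set(), goal)  # 0.0
print(novice, expert)
```

The two calls illustrate the key property claimed in the abstract: a document that is fully novel and goal-relevant for a novice can score zero for an expert who already knows its useful content.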

Citation (APA)

El Zein, D., & Da Costa Pereira, C. (2022). User’s Knowledge and Information Needs in Information Retrieval Evaluation. In UMAP2022 - Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (pp. 170–178). Association for Computing Machinery, Inc. https://doi.org/10.1145/3503252.3531325
