Reliable Information Retrieval Systems Performance Evaluation: A Review


Abstract

With the progressive development and increasing availability of search tools, researchers' interest in evaluating information retrieval from the user perspective has grown tremendously. Information retrieval system evaluation follows the Cranfield paradigm, in which test collections provide the foundation of the evaluation process. A test collection consists of a document corpus, a set of topics, and relevance judgments. The relevance judgments identify which documents in the collection are relevant to each topic. The accuracy of the evaluation process depends on the number of relevant documents in the relevance judgment set, called qrels. This paper presents a comprehensive study of the various ways to increase the number of relevant documents in the qrels, thereby improving the quality of the qrels and, in turn, the accuracy of the evaluation process. The different ways each methodology retrieves additional relevant documents are categorized, described, and analyzed, resulting in an inclusive flow of these methodologies.
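To make the role of qrels concrete, here is a minimal illustrative sketch (not taken from the paper; the function, document IDs, and data are hypothetical) of how a standard evaluation metric, Average Precision, is computed against a qrels set. Any relevant document missing from the qrels is scored as non-relevant, which is why incomplete qrels bias the evaluation.

```python
def average_precision(ranked_docs, qrels):
    """Average Precision for one topic.

    ranked_docs: system output, document IDs in rank order.
    qrels: set of document IDs judged relevant for the topic.
    Documents absent from qrels are treated as non-relevant,
    so incomplete qrels directly distort the score.
    """
    hits = 0
    precision_sum = 0.0
    for rank, doc_id in enumerate(ranked_docs, start=1):
        if doc_id in qrels:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / len(qrels) if qrels else 0.0

# Hypothetical example: a 4-document ranking with two judged-relevant docs.
ranking = ["d3", "d1", "d7", "d2"]
qrels = {"d1", "d2"}
print(average_precision(ranking, qrels))  # → 0.5
```

If a genuinely relevant document (say `d3`) is missing from the qrels, the system is penalized for retrieving it at rank 1; adding it to the judgment set changes the score, which is the motivation for the qrels-enrichment methods the paper reviews.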

Citation (APA)

Joseph, M. H., & Ravana, S. D. (2024). Reliable Information Retrieval Systems Performance Evaluation: A Review. IEEE Access, 12, 51740–51751. https://doi.org/10.1109/ACCESS.2024.3377239
