The evaluation of an information retrieval system involves judging document relevance, a task performed by independent assessors without user involvement. The TREC interactive task requires experimental subjects to save relevant documents covering different aspects of a topic within a specified time limit. In the evaluation of the experiment, the rate of agreement between assessors is consistent with rates reported for similar multi-assessor experiments.
Wu, M., Fuller, M., & Wilkinson, R. (2000). Role of a judge in a user based retrieval experiment. SIGIR Forum (ACM Special Interest Group on Information Retrieval), 331–333.