Information retrieval evaluation relies heavily on human effort to assess the relevance of result documents. Recent years have seen considerable progress in reducing this human effort and thus lowering the cost of evaluation. Selective labeling strategies carefully choose a subset of result documents to label, for instance based on their aggregate rank across results; strategies for incomplete label mitigation compensate for missing labels, for instance by predicting them with machine learning methods. How these different strategies interact, however, is unknown. In this work, we study the interaction of several state-of-the-art strategies for selective labeling and incomplete label mitigation on four years of TREC Web Track data (2011–2014). Moreover, we propose and evaluate MAXREP, a novel selective labeling strategy designed to select effective training data for missing label prediction.
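To make the two families of strategies concrete, the following is a minimal Python sketch, not the authors' algorithms: one function picks a labeling budget of k documents by their aggregate rank across runs, and another performs a greedy max-representativeness selection loosely inspired by the MAXREP idea. The function names, the cosine similarity over term-weight vectors, and the coverage-style greedy objective are assumptions made purely for illustration.

```python
# Illustrative sketch only; function names and objectives are assumed, not from the paper.
from collections import defaultdict


def select_by_aggregate_rank(runs, k):
    """Selective labeling by aggregate rank: label the k documents with the
    lowest rank summed over all runs (documents absent from a run are
    penalized with rank len(run) + 1)."""
    docs = {d for run in runs for d in run}
    agg = defaultdict(int)
    for run in runs:
        pos = {d: r for r, d in enumerate(run, start=1)}
        for d in docs:
            agg[d] += pos.get(d, len(run) + 1)
    return sorted(docs, key=lambda d: agg[d])[:k]


def select_max_rep(doc_vectors, k):
    """Greedy max-representativeness selection (assumed coverage objective):
    repeatedly add the document that most increases the summed similarity
    between the selected set and all pooled documents."""
    def sim(u, v):
        # cosine similarity over sparse term-weight dicts
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = sum(w * w for w in u.values()) ** 0.5
        nv = sum(w * w for w in v.values()) ** 0.5
        return dot / (nu * nv) if nu and nv else 0.0

    selected = []
    best_cover = {d: 0.0 for d in doc_vectors}  # best similarity to selected set
    while len(selected) < min(k, len(doc_vectors)):
        def gain(c):
            return sum(max(best_cover[d], sim(doc_vectors[c], doc_vectors[d]))
                       for d in doc_vectors) - sum(best_cover.values())
        cand = max((d for d in doc_vectors if d not in selected), key=gain)
        for d in doc_vectors:
            best_cover[d] = max(best_cover[d], sim(doc_vectors[cand], doc_vectors[d]))
        selected.append(cand)
    return selected
```

A selection chosen this way would then be labeled by assessors, and the judged documents could serve as training data for a classifier that predicts the missing labels of the remaining pool; that downstream prediction step is deliberately omitted here.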
Hui, K., & Berberich, K. (2015). Selective labeling and incomplete label mitigation for low-cost evaluation. In Lecture Notes in Computer Science (Vol. 9309, pp. 137–148). Springer. https://doi.org/10.1007/978-3-319-23826-5_14