On evaluating query performance predictors


Abstract

Query performance prediction (QPP) aims to estimate the difficulty of a query without access to relevance assessment information. The quality of a predictor is evaluated by the correlation between its predicted values and the actual Average Precision (AP). The Pearson correlation coefficient, Spearman's Rho, and Kendall's Tau are the most popular measures for computing this correlation. Previous work has shown that these measures are not sufficiently equitable or appropriate for evaluating predictor quality. In this paper, we introduce two additional measures, the Maximal Information Coefficient (MIC) and Brownian Distance Correlation (Dcor), for evaluating predictor quality, and compare them with the three traditional measures to observe the differences. We conduct a series of experiments on several standard TREC datasets and analyze the results. The experimental results reveal that MIC and Dcor lead to different conclusions in some cases, offering useful supplements for evaluating predictor quality. Furthermore, the sensitivity of the different measures to changes in predictors' parameters is distinct in our experiments, and we analyze these differences. © 2014 Springer International Publishing.
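To make the evaluation setup concrete, the following is an illustrative sketch (not code from the paper): given hypothetical per-query predictor scores and AP values, it computes Pearson's correlation, Spearman's Rho, Kendall's Tau, and Brownian distance correlation in plain NumPy. The data and variable names are invented for demonstration; MIC is omitted because it requires a dedicated estimator.

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation: centred dot product over norms
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def spearman(x, y):
    # Spearman's Rho = Pearson correlation computed on ranks
    # (this toy version assumes no tied values)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

def kendall(x, y):
    # Kendall's Tau-a: (concordant - discordant pairs) / total pairs
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return float(s / (n * (n - 1) / 2))

def dcor(x, y):
    # Brownian distance correlation via doubly centred distance matrices
    def centred(v):
        d = np.abs(v[:, None] - v[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centred(x), centred(y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ap = rng.random(50)                        # hypothetical per-query AP
    pred = ap + 0.2 * rng.standard_normal(50)  # noisy predictor scores
    for name, f in [("Pearson", pearson), ("Spearman", spearman),
                    ("Kendall", kendall), ("Dcor", dcor)]:
        print(f"{name}: {f(pred, ap):.3f}")
```

Unlike the three rank- or linearity-based measures, Dcor is zero only under independence, which is one reason it can rank predictors differently.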

Citation (APA)

Huang, Y., Luo, T., Wang, X., Hui, K., Wang, W. J., & He, B. (2014). On evaluating query performance predictors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8351 LNCS, pp. 184–194). Springer Verlag. https://doi.org/10.1007/978-3-319-09265-2_20
