Learning to rank for information retrieval requires domain experts to label the documents used in the training step, and labeling documents for different research areas is costly. In this paper, we propose a novel cross-domain adaptive model based on importance weighting, a common technique for correcting bias or distribution discrepancy. Here we use "cross-domain" to mean that the input distributions in the training and testing phases differ. First, we use the Kullback-Leibler Importance Estimation Procedure (KLIEP), a typical importance-weighting method, to estimate the importance weights. Then we modify AdaRank so that it becomes a transductive model. Experiments on OHSUMED show that our method outperforms several other state-of-the-art methods. © 2011 Springer-Verlag.
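The importance-estimation step the abstract describes can be sketched in a minimal form. The snippet below is an illustrative KLIEP-style estimator, not the authors' implementation: it models the importance ratio w(x) = p_test(x)/p_train(x) as a mixture of Gaussian kernels centered at the test points, maximizes the log-likelihood of the test sample by projected gradient ascent, and enforces the KLIEP normalization constraint that the mean weight over the training sample equals 1. All function names, the kernel width, and the optimizer settings are assumptions chosen for clarity.

```python
import numpy as np

def kliep_weights(x_tr, x_te, sigma=1.0, n_iter=200, lr=0.01):
    """Illustrative KLIEP sketch: estimate importance weights
    w(x) = p_te(x) / p_tr(x) for the training points x_tr,
    using a Gaussian-kernel model centered at the test points x_te."""
    def gauss(a, b):
        # Gaussian kernel matrix: K[i, l] = exp(-||a_i - b_l||^2 / (2 sigma^2))
        d = a[:, None, :] - b[None, :, :]
        return np.exp(-np.sum(d * d, axis=2) / (2.0 * sigma ** 2))

    K_te = gauss(x_te, x_te)   # kernels evaluated at test points
    K_tr = gauss(x_tr, x_te)   # kernels evaluated at training points
    b = K_tr.mean(axis=0)      # constraint vector: b @ alpha == mean train weight
    alpha = np.ones(x_te.shape[0])

    for _ in range(n_iter):
        # Gradient ascent on the test-sample log-likelihood
        # J(alpha) = sum_i log(K_te[i] @ alpha)
        grad = K_te.T @ (1.0 / (K_te @ alpha))
        alpha += lr * grad
        # Project back onto the feasible set: alpha >= 0, b @ alpha = 1
        alpha = np.maximum(alpha, 0.0)
        alpha /= b @ alpha

    return K_tr @ alpha        # importance weights for the training points
```

Under covariate shift, e.g. training data drawn near 0 and test data drawn near 1, the returned weights upweight the training points that fall where the test distribution has mass, while their mean over the training set stays at 1 by construction; these per-document weights are what a weighted ranker such as the paper's importance-weighted AdaRank would consume.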
Ren, S., Hou, Y., Zhang, P., & Liang, X. (2011). Importance weighted AdaRank. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6838 LNCS, pp. 448–455). https://doi.org/10.1007/978-3-642-24728-6_61