Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need

Abstract

Designing pre-training objectives that more closely resemble the downstream task can lead to better performance of pre-trained language models at the fine-tuning stage, especially in ad-hoc retrieval. Existing pre-training approaches tailored for IR incorporate weakly supervised signals, such as query-likelihood-based sampling, to construct pseudo query-document pairs from a raw textual corpus. However, these signals rely heavily on the sampling method; for example, the query likelihood model may introduce considerable noise into the constructed pre-training data. In this paper, we propose to leverage large-scale hyperlinks and anchor texts to pre-train the language model for ad-hoc retrieval. Since anchor texts are written by webmasters and usually summarize the target document, they yield more accurate and reliable pre-training samples than a sampling algorithm. Considering different views of the downstream ad-hoc retrieval task, we devise four pre-training tasks based on hyperlinks. We then pre-train a Transformer model to predict pair-wise preference, jointly with the Masked Language Model objective. Experimental results on two large-scale ad-hoc retrieval datasets show that our model significantly outperforms existing methods.
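To make the idea concrete, the sketch below illustrates the general pattern the abstract describes: treating anchor texts as pseudo queries for the documents they link to, and combining a pair-wise preference loss with an MLM loss. This is a minimal, hypothetical illustration, not the authors' code; the data layout (`pages`), the random negative sampling, and the hinge-style ranking loss are assumptions, and the paper's four hyperlink-based pre-training tasks are not reproduced here.

```python
# Hypothetical sketch: pseudo query-document pairs from hyperlinks, and a
# joint pair-wise preference + MLM loss. Not the authors' implementation.
import random
import torch.nn.functional as F


def build_anchor_pairs(pages):
    """pages: dict url -> {"text": str, "links": [(anchor_text, target_url), ...]}.

    Returns (anchor, doc_pos, doc_neg) triples: the anchor text acts as a
    pseudo query, the linked page as a relevant document, and a randomly
    sampled other page as a non-relevant one (assumed negative sampling).
    """
    urls = list(pages)
    triples = []
    for url, page in pages.items():
        for anchor, target in page["links"]:
            if target not in pages:
                continue
            negative = random.choice([u for u in urls if u != target])
            triples.append((anchor, pages[target]["text"], pages[negative]["text"]))
    return triples


def joint_loss(score_pos, score_neg, mlm_logits, mlm_labels, margin=1.0):
    """Pair-wise preference (hinge) loss plus masked-language-model cross entropy.

    score_pos / score_neg: model scores for (pseudo query, doc+) and (pseudo query, doc-).
    mlm_logits: [num_masked_tokens, vocab_size]; mlm_labels: [num_masked_tokens].
    """
    rank_loss = F.relu(margin - score_pos + score_neg).mean()  # prefer doc+ over doc-
    mlm_loss = F.cross_entropy(mlm_logits, mlm_labels)         # standard MLM objective
    return rank_loss + mlm_loss
```

In practice, the scores would come from a Transformer encoder over the concatenated (pseudo query, document) pair, and the two loss terms would be weighted; the exact formulation used in the paper may differ.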

Citation (APA)

Ma, Z., Dou, Z., Xu, W., Zhang, X., Jiang, H., Cao, Z., & Wen, J. R. (2021). Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need. In International Conference on Information and Knowledge Management, Proceedings (pp. 1212–1221). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482286
