Current embedding-based large-scale retrieval models are trained with 0-1 hard labels that indicate whether a query is relevant to a document, ignoring the rich information carried by the degree of relevance. This paper proposes to improve embedding-based retrieval by better characterizing the query-document relevance degree, introducing label enhancement (LE) to this setting for the first time. To generate label distributions in the retrieval scenario, we design a novel and effective supervised LE method that incorporates prior knowledge from dynamic term weighting methods into contextual embeddings. By training models with the generated label distributions as auxiliary supervision, our method significantly outperforms four competitive existing retrieval models as well as their counterparts equipped with two alternative LE techniques. This superiority is readily observed on English and Chinese large-scale retrieval tasks under both standard and cold-start settings.
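The abstract describes converting 0-1 hard labels into a label distribution and using it as auxiliary supervision. The sketch below is an illustrative assumption of that idea, not the paper's exact LE method: hard labels are blended with a normalized term-weighting prior (standing in for the dynamic term weighting methods mentioned above) to form a soft relevance distribution, which can then be compared with a model's predicted distribution via KL divergence as an auxiliary loss. The function names, the blending scheme, and the `alpha` parameter are hypothetical.

```python
import numpy as np

def soft_label_distribution(hard_labels, term_scores, alpha=0.7):
    """Blend 0-1 hard labels with normalized term-weighting scores to
    form a soft relevance distribution over candidate documents.

    `term_scores` stands in for the dynamic term-weighting prior from
    the abstract; the convex blending with weight `alpha` is an
    illustrative assumption, not the paper's actual LE construction.
    """
    hard = np.asarray(hard_labels, dtype=float)
    prior = np.asarray(term_scores, dtype=float)
    prior = prior / prior.sum()  # normalize the prior to a distribution
    if hard.sum() > 0:
        hard_dist = hard / hard.sum()
    else:
        # no relevant document marked: fall back to a uniform distribution
        hard_dist = np.full_like(hard, 1.0 / len(hard))
    return alpha * hard_dist + (1.0 - alpha) * prior

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), usable as an auxiliary loss between the generated
    label distribution and a model's predicted relevance distribution."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

# Example: one relevant document among three candidates, with a
# (hypothetical) term-weighting prior over the candidates.
soft = soft_label_distribution([1, 0, 0], [0.5, 0.3, 0.2])
```

With `alpha=0.7`, the relevant document keeps most of the probability mass while irrelevant ones receive graded, nonzero mass from the prior, which is what lets the distribution encode relevance degree rather than a binary judgment.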
Liu, P., Wang, X., Wang, S., Ye, W., Xi, X., & Zhang, S. (2021). Improving Embedding-based Large-scale Retrieval via Label Enhancement. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 133–142). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.13