Beyond [CLS] through ranking by generation

Abstract

Generative models for Information Retrieval, in which ranking is framed as the task of generating the query from a document's language model, were very successful in various IR tasks in the past. However, with the advent of modern deep neural networks, attention has shifted to discriminative ranking functions that instead model the semantic similarity of documents and queries. Recently, deep generative models such as GPT2 and BART have been shown to be excellent text generators, but their effectiveness as rankers has not yet been demonstrated. In this work, we revisit the generative framework for information retrieval and show that our generative approaches are as effective as state-of-the-art semantic similarity-based discriminative models on the answer selection task. Additionally, we demonstrate the effectiveness of unlikelihood losses for IR.
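The classical generative framework the abstract revisits scores each document by the likelihood that the document's language model generates the query. The paper does this with large neural models (GPT2, BART); the sketch below illustrates only the underlying query-likelihood idea with a toy Dirichlet-smoothed unigram model, so the function name and smoothing parameter are illustrative assumptions, not the paper's method.

```python
import math
from collections import Counter

def rank_by_generation(query, documents, mu=10.0):
    """Rank documents by the log-likelihood of generating the query
    from each document's Dirichlet-smoothed unigram language model.
    (Toy illustration of query-likelihood ranking, not the paper's
    neural models; `mu` is an assumed smoothing hyperparameter.)"""
    # Collection statistics, used as the smoothing background model
    coll = Counter()
    for d in documents:
        coll.update(d.split())
    coll_len = sum(coll.values())

    def log_p_query(doc):
        tokens = doc.split()
        tf = Counter(tokens)
        dl = len(tokens)
        score = 0.0
        for w in query.split():
            p_coll = coll.get(w, 0) / coll_len
            # Dirichlet smoothing: blend document and collection estimates
            p = (tf.get(w, 0) + mu * p_coll) / (dl + mu)
            score += math.log(p) if p > 0 else math.log(1e-12)
        return score

    return sorted(documents, key=log_p_query, reverse=True)

docs = [
    "the cat sat on the mat",
    "dogs bark at the moon",
    "cats and mats everywhere",
]
ranked = rank_by_generation("cat mat", docs)
```

In the neural variants the abstract describes, the unigram probabilities are simply replaced by the conditional token probabilities a pretrained generator assigns to the query given the document.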

Citation (APA)

dos Santos, C. N., Ma, X., Nallapati, R., Huang, Z., & Xiang, B. (2020). Beyond [CLS] through ranking by generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1722–1727). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.134
