Abstract
We introduce ART, a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a central challenge for open-domain tasks, such as Open QA, where state-of-the-art methods typically require large supervised datasets with custom hard-negative mining and denoising of positive examples. ART, in contrast, only requires access to unpaired inputs and outputs (e.g., questions and potential answer passages). It uses a new passage-retrieval autoencoding scheme, where (1) an input question is used to retrieve a set of evidence passages, and (2) the passages are then used to compute the probability of reconstructing the original question. Training for retrieval based on question reconstruction enables effective unsupervised learning of both passage and question encoders, which can later be incorporated into complete Open QA systems without any further finetuning. Extensive experiments demonstrate that ART obtains state-of-the-art results on multiple QA retrieval benchmarks with only generic initialization from a pre-trained language model, removing the need for labeled data and task-specific losses.
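The autoencoding objective in steps (1)–(2) can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering of the training signal, not the authors' implementation: the retriever's softmax distribution over the top-K retrieved passages is trained to match a distribution derived from each passage's question-reconstruction likelihood under a frozen language model. The module names, shapes, and the toy stand-in encoders are all illustrative assumptions.

```python
# Minimal sketch of ART's question-reconstruction training signal.
# Hypothetical stand-ins throughout: a toy question encoder replaces the
# paper's pre-trained dual encoder, and random scores replace the frozen
# LM's per-passage reconstruction log-likelihoods.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
K, D, VOCAB = 8, 32, 100  # retrieved passages, embedding dim, toy vocab size

# Trainable question encoder (toy: bag-of-words -> embedding) and cached
# passage embeddings; the passage index stays fixed, so no gradient flows there.
question_encoder = torch.nn.Linear(VOCAB, D)
passage_embeddings = torch.randn(K, D)   # precomputed top-K passage vectors

question_bow = torch.randn(1, VOCAB)     # toy featurized input question

# (1) Retriever distribution over the K retrieved passages (dot-product scores).
q_emb = question_encoder(question_bow)                 # [1, D]
retriever_logits = q_emb @ passage_embeddings.T        # [1, K]
retriever_log_probs = F.log_softmax(retriever_logits, dim=-1)

# (2) Teacher distribution from question reconstruction: log p(question | passage)
# under a frozen pre-trained LM. Faked here with random values; in ART these
# would be teacher-forced token log-probabilities summed over the question.
with torch.no_grad():
    recon_log_likelihood = torch.randn(1, K)
    teacher_probs = F.softmax(recon_log_likelihood, dim=-1)

# Train the retriever to match the reconstruction-based distribution.
loss = F.kl_div(retriever_log_probs, teacher_probs, reduction="batchmean")
loss.backward()
print(f"KL loss: {loss.item():.4f}")
```

Because the reconstruction scorer is frozen, the gradient shapes only the retriever; this is what lets the question itself serve as the supervision signal, with no labeled question–passage pairs required.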
Citation
Sachan, D. S., Lewis, M., Yogatama, D., Zettlemoyer, L., Pineau, J., & Zaheer, M. (2023). Questions Are All You Need to Train a Dense Passage Retriever. Transactions of the Association for Computational Linguistics, 11, 600–616. https://doi.org/10.1162/tacl_a_00564