Retrieval Data Augmentation Informed by Downstream Question Answering Performance

Abstract

Training retrieval models to fetch contexts for Question Answering (QA) over large corpora requires labeling relevant passages in those corpora. Since exhaustive manual annotation of all relevant passages is not feasible, prior work uses text-overlap heuristics to find passages that are likely to contain the answer, but these heuristics break down when the task requires deeper reasoning and answers are not extractable spans (e.g., multi-hop or discrete reasoning). We address this issue by identifying relevant passages based on whether they are useful for a trained QA model to arrive at the correct answer, and we develop a search process guided by the QA model's loss. Our experiments show that this approach identifies relevant context for unseen data more than 90% of the time on the IIRC dataset, and that retrieval models trained on the augmented data generalize better to the end QA task than models trained only on the gold retrieval data, on both the IIRC and QASC datasets.
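
To make the loss-guided relevance idea concrete, here is a minimal sketch (not the paper's released code; all names are hypothetical) that ranks candidate passages by the loss a trained sequence-to-sequence QA model assigns to the gold answer when conditioned on the question and each passage. Lower loss is taken as evidence that the passage is useful for reaching the correct answer; how candidates are proposed and how many are kept are assumptions left to the caller.

```python
import torch

def score_passages_by_qa_loss(qa_model, tokenizer, question, gold_answer, candidate_passages):
    """Rank candidate passages by the QA model's loss on the gold answer.

    Assumes a Hugging Face style seq2seq QA model (e.g., a UnifiedQA/T5-like
    model) that accepts `labels` and returns a `.loss`. Lower loss is treated
    as higher relevance.
    """
    scored = []
    for passage in candidate_passages:
        # Condition the QA model on the question plus one candidate passage.
        inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
        labels = tokenizer(gold_answer, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = qa_model(**inputs, labels=labels).loss.item()
        scored.append((loss, passage))
    # Passages that most help the model produce the gold answer come first.
    scored.sort(key=lambda pair: pair[0])
    return scored
```

In this view, the augmentation step would label the top-ranked passages as (pseudo-)relevant retrieval training data; the exact search procedure over candidate sets follows the paper, not this sketch.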

Citation (APA)

Ferguson, J., Dasigi, P., Khot, T., & Hajishirzi, H. (2022). Retrieval Data Augmentation Informed by Downstream Question Answering Performance. In FEVER 2022 - 5th Fact Extraction and VERification Workshop, Proceedings of the Workshop (pp. 1–5). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.fever-1.1
