Open-domain questions are likely to be open-ended and ambiguous, leading to multiple valid answers. Existing approaches typically adopt a rerank-then-read framework, where a reader predicts answers from the top-ranking evidence selected by a reranker. According to our empirical analysis, this framework faces three problems: first, to leverage a large reader under a memory constraint, the reranker must select only a few relevant passages that still cover diverse answers, yet balancing relevance and diversity is non-trivial; second, the small reading budget prevents the reader from accessing valuable retrieved evidence filtered out by the reranker; third, when a generative reader predicts all answers at once from all selected evidence, whether one valid answer is predicted can pathologically depend on the evidence for other valid answer(s). To address these issues, we propose to answer open-domain multi-answer questions with a recall-then-verify framework, which separates the reasoning process for each answer so that we can make better use of retrieved evidence while also leveraging large models under the same memory constraint. Our framework achieves state-of-the-art results on two multi-answer datasets and predicts significantly more gold answers than a rerank-then-read system equipped with an oracle reranker.
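As a rough illustration of the two pipelines contrasted above, the following minimal Python sketch shows their control flow. It is not the authors' implementation: the component names (retrieve, rerank, read, recall_candidates, verify) and the per-candidate verification loop are hypothetical placeholders standing in for the retriever, reranker, reader, recaller, and verifier modules described in the abstract.

```python
from typing import Callable, List

# Hypothetical component signatures (placeholders, not the paper's API):
#   retrieve(question)                  -> list of candidate passages
#   rerank(question, passages)          -> passages sorted by estimated relevance
#   read(question, passages)            -> list of predicted answers (joint prediction)
#   recall_candidates(question, passages) -> broad set of candidate answers
#   verify(question, answer, passages)  -> True if the answer is supported by evidence


def rerank_then_read(question: str,
                     retrieve: Callable[[str], List[str]],
                     rerank: Callable[[str, List[str]], List[str]],
                     read: Callable[[str, List[str]], List[str]],
                     budget: int = 5) -> List[str]:
    """Rerank-then-read: keep only `budget` passages for a large reader,
    which then predicts all answers jointly from that small set."""
    passages = retrieve(question)
    selected = rerank(question, passages)[:budget]  # evidence outside the budget is discarded
    return read(question, selected)                 # one answer's prediction depends on the others' evidence


def recall_then_verify(question: str,
                       retrieve: Callable[[str], List[str]],
                       recall_candidates: Callable[[str, List[str]], List[str]],
                       verify: Callable[[str, str, List[str]], bool]) -> List[str]:
    """Recall-then-verify: first recall a broad set of candidate answers,
    then verify each candidate separately, so the reasoning for one answer
    is decoupled from the evidence for other answers."""
    passages = retrieve(question)
    candidates = recall_candidates(question, passages)
    return [ans for ans in candidates if verify(question, ans, passages)]
```

In this sketch, the key difference is that recall_then_verify scores each candidate independently against the retrieved evidence, whereas rerank_then_read forces a single joint prediction from a small, reranker-filtered passage set.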
Shao, Z., & Huang, M. (2022). Answering Open-Domain Multi-Answer Questions via a Recall-then-Verify Framework. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1825–1838). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.128