We study multi-answer retrieval, an underexplored problem that requires retrieving passages to cover multiple distinct answers for a given question. This task requires joint modeling of retrieved passages, as models should not repeatedly retrieve passages containing the same answer at the cost of missing a different valid answer. In this paper, we introduce JPR, the first joint passage retrieval model for multi-answer retrieval. JPR makes use of an autoregressive reranker that selects a sequence of passages, each conditioned on previously selected passages. JPR is trained to select passages that cover new answers at each timestep and uses a tree-decoding algorithm that allows flexible control over the degree of diversity. Compared to prior approaches, JPR achieves significantly better answer coverage on three multi-answer datasets. When combined with downstream question answering, the improved retrieval enables larger answer generation models since they need to consider fewer passages, establishing a new state-of-the-art.
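To make the autoregressive selection idea concrete, the sketch below shows a minimal, greedy version of sequential passage selection: at each timestep, every remaining candidate is rescored conditioned on the question and the passages already chosen, and the best one is added. This is only an illustration under simplifying assumptions; the function names (autoregressive_select, toy_score) and the toy lexical scorer are hypothetical and are not the paper's reranker or its tree-decoding procedure.

```python
from typing import Callable, List, Sequence


def autoregressive_select(
    question: str,
    candidates: Sequence[str],
    score_fn: Callable[[str, Sequence[str], str], float],
    k: int,
) -> List[str]:
    """Greedily pick k passages, rescoring candidates at every step
    conditioned on the passages already selected (a simplified stand-in
    for an autoregressive reranker)."""
    selected: List[str] = []
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        # Score each remaining passage given the question and the
        # passages selected so far, then take the highest-scoring one.
        best = max(remaining, key=lambda p: score_fn(question, selected, p))
        selected.append(best)
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    # Hypothetical toy scorer: reward overlap with the question but
    # penalize overlap with already-selected passages, so later picks
    # tend to cover new content rather than repeating the same answer.
    def toy_score(question: str, selected: Sequence[str], passage: str) -> float:
        q_words = set(question.lower().split())
        p_words = set(passage.lower().split())
        seen = {w for s in selected for w in s.lower().split()}
        return len(q_words & p_words) - 0.5 * len(p_words & seen)

    passages = [
        "Mark Twain wrote The Adventures of Tom Sawyer.",
        "Tom Sawyer was written by Mark Twain in 1876.",
        "Twain also wrote Adventures of Huckleberry Finn.",
    ]
    print(autoregressive_select("What books did Mark Twain write?", passages, toy_score, k=2))
```

The conditioning on previously selected passages is what distinguishes this from scoring each passage independently: a passage that merely restates an answer already covered is ranked lower than one that introduces a new answer.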
Citation
Min, S., Lee, K., Chang, M.-W., Toutanova, K., & Hajishirzi, H. (2021). Joint Passage Ranking for Diverse Multi-Answer Retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 6997–7008). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.560