We propose a novel open-domain question answering (ODQA) framework for answering single- and multi-hop questions across heterogeneous knowledge sources. The key novelty of our method is the introduction of intermediary modules into the standard retriever-reader pipeline. Unlike previous methods that rely solely on the retriever to gather all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set. Specifically, our method links the retrieved evidence with its related global context into graphs and organizes them into a candidate list of evidence chains. Built upon pretrained language models, our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, which draw on tables and passages from Wikipedia. In particular, our model substantially outperforms the previous state of the art on OTT-QA with an exact match score of 47.3 (a 45% relative gain).
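The abstract describes a retriever → intermediary → reader pipeline in which the intermediary links retrieved evidence to related context and emits ranked evidence chains. The sketch below illustrates, under stated assumptions, how such an intermediary stage could be organized; the names (`Evidence`, `build_evidence_graph`, `enumerate_chains`), the word-overlap linking rule, and the score-sum ranking heuristic are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the intermediary stage of a retriever -> intermediary -> reader
# pipeline. All names and heuristics are illustrative assumptions, not the paper's method.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """One retrieved unit (a passage or a linearized table segment)."""
    text: str
    score: float  # retriever relevance score
    links: List["Evidence"] = field(default_factory=list)  # related global context


def build_evidence_graph(retrieved: List[Evidence]) -> List[Evidence]:
    """Link each retrieved unit to related global context.
    Here the linking rule is a toy word-overlap check standing in for
    entity/hyperlink-based linking."""
    for a in retrieved:
        for b in retrieved:
            if a is not b and set(a.text.lower().split()) & set(b.text.lower().split()):
                a.links.append(b)
    return retrieved


def enumerate_chains(graph: List[Evidence], max_hops: int = 2) -> List[List[Evidence]]:
    """Organize the graph into a ranked candidate list of evidence chains
    (single-hop chains plus two-hop chains that follow one link)."""
    chains = [[e] for e in graph]
    if max_hops >= 2:
        chains += [[e, nbr] for e in graph for nbr in e.links]
    # Toy ranking heuristic: sum of retriever scores along the chain.
    return sorted(chains, key=lambda c: sum(e.score for e in c), reverse=True)


if __name__ == "__main__":
    retrieved = [
        Evidence("Table: 2018 film awards | Best Picture winner: Roma", 0.9),
        Evidence("Roma is a 2018 film directed by Alfonso Cuarón.", 0.8),
        Evidence("The Shape of Water won Best Picture in 2017.", 0.3),
    ]
    for chain in enumerate_chains(build_evidence_graph(retrieved))[:3]:
        print(" -> ".join(e.text for e in chain))
```

In a full system, the top-ranked chains would then be passed to the reader model, which conditions on each chain to extract or generate the answer.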
Ma, K., Cheng, H., Liu, X., Nyberg, E., & Gao, J. (2022). Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5389–5403). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.392