Abstract
To build robust question answering systems, we need the ability to verify whether answers to questions are truly correct, not just "good enough" in the context of imperfect QA datasets. We explore the use of natural language inference (NLI) as a way to achieve this goal, as NLI inherently requires the premise (document context) to contain all necessary information to support the hypothesis (proposed answer to the question). We leverage large pretrained models and recent prior datasets to construct powerful question conversion and decontextualization modules, which can reformulate QA instances as premise-hypothesis pairs with very high reliability. Then, by combining standard NLI datasets with NLI examples automatically derived from QA training data, we can train NLI models to evaluate QA systems' proposed answers. We show that our approach improves the confidence estimation of a QA model across different domains. Careful manual analysis of our NLI model's predictions shows that it can further identify cases where the QA model produces the right answer for the wrong reason, i.e., when the answer sentence does not address all aspects of the question.
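To make the verification step concrete, the sketch below illustrates the core idea under some assumptions: it uses the off-the-shelf roberta-large-mnli checkpoint as a stand-in for the NLI model trained in the paper, and a hand-written declarative hypothesis in place of the learned question-conversion and decontextualization modules.

```python
# Minimal sketch of QA answer verification via NLI. The model checkpoint and
# the hand-written hypothesis are illustrative stand-ins, not the authors'
# actual trained pipeline.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

# Premise: the document context the QA system answered from.
premise = ("Marie Curie was the first woman to win a Nobel Prize, "
           "receiving the Nobel Prize in Physics in 1903.")

# The (question, proposed answer) pair rewritten as a declarative hypothesis;
# the paper learns this conversion, here it is written by hand.
question = "Who was the first woman to win a Nobel Prize?"
proposed_answer = "Marie Curie"
hypothesis = "Marie Curie was the first woman to win a Nobel Prize."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment.
probs = logits.softmax(dim=-1).squeeze()
print(f"entailment probability: {probs[2].item():.3f}")
# A verifier would accept the QA system's answer only when the premise
# entails the hypothesis with high probability.
```

The key property this buys, per the abstract, is that entailment demands the premise support every aspect of the hypothesis, so an answer that only partially addresses the question scores low even when the answer span happens to be correct.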
Citation
Chen, J., Choi, E., & Durrett, G. (2021). Can NLI Models Verify QA Systems’ Predictions? In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 3841–3854). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.324