QUASER: Question Answering with Scalable Extractive Rationalization


Abstract

Designing natural language processing (NLP) models that produce predictions by first extracting a set of relevant input sentences, i.e., rationales, is gaining importance for improving model interpretability and producing supporting evidence for users. Current unsupervised approaches are designed to extract rationales that maximize prediction accuracy, which is invariably achieved by exploiting spurious correlations in datasets, and leads to unconvincing rationales. In this paper, we introduce unsupervised generative models to extract dual-purpose rationales, which must not only support a subsequent answer prediction, but also support a reproduction of the input query. We show that such models can produce more meaningful rationales that are less influenced by dataset artifacts, and as a result also achieve state-of-the-art results on rationale extraction metrics on four datasets from the ERASER benchmark, significantly improving upon previous unsupervised methods. Our multi-task model is scalable and enables using state-of-the-art pretrained language models to design explainable question answering systems.
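The core idea in the abstract, a rationale that must serve two tasks at once, can be sketched as a small multi-task objective. This is a hypothetical illustration, not the authors' implementation: the sentence scores, loss values, and function names (`select_rationale`, `dual_purpose_loss`) are all illustrative stand-ins.

```python
# Hypothetical sketch of the dual-purpose rationale idea: an extracted
# rationale must support both answer prediction and query reproduction.
# All names, scores, and losses below are illustrative stand-ins.

def select_rationale(sentences, scores, k=2):
    """Pick the k highest-scoring sentences as the extractive rationale."""
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    keep = sorted(ranked[:k])  # restore original document order
    return [sentences[i] for i in keep]

def dual_purpose_loss(answer_loss, query_recon_loss, lam=1.0):
    """Multi-task objective: the rationale must explain the answer AND
    allow reproducing the input query, which discourages rationales that
    rely only on spurious answer-side correlations."""
    return answer_loss + lam * query_recon_loss

sentences = ["The Nile is in Africa.", "It is 6650 km long.", "Many boats sail it."]
scores = [0.9, 0.7, 0.1]  # stand-in relevance scores from some extractor model
rationale = select_rationale(sentences, scores, k=2)
loss = dual_purpose_loss(answer_loss=0.5, query_recon_loss=0.3, lam=0.5)
```

In a real system the two loss terms would come from trained prediction and query-generation heads; the sketch only shows how the query-reconstruction term is folded into the training signal.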

Citation (APA)

Ghoshal, A., Iyer, S., Paranjape, B., Lakhotia, K., Yih, S. W. T., & Mehdad, Y. (2022). QUASER: Question Answering with Scalable Extractive Rationalization. In SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1208–1218). Association for Computing Machinery, Inc. https://doi.org/10.1145/3477495.3532049
