Do We Know What We Don't Know? Studying Unanswerable Questions beyond SQuAD 2.0


Abstract

Understanding when a text snippet does not provide the sought-after information is an essential part of natural language understanding. Recent work (SQuAD 2.0; Rajpurkar et al., 2018) has attempted to make progress in this direction by enriching the SQuAD dataset for the Extractive QA task with unanswerable questions. However, as we show, the performance of a top system trained on SQuAD 2.0 drops considerably in out-of-domain scenarios, limiting its use in practical situations. To study this, we build an out-of-domain corpus focusing on simple event-based questions and distinguish between two types of unanswerable (IDK) questions: competitive questions, where the context includes an entity of the same type as the expected answer, and simpler, non-competitive questions, where no entity of the same type appears in the context. We find that SQuAD 2.0-based models fail even on the simpler questions. We then analyze the similarities and differences between the IDK phenomenon in Extractive QA and the Recognizing Textual Entailment task (RTE; Dagan et al., 2013) and investigate the extent to which the latter can be used to improve performance.
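To make the competitive vs. non-competitive distinction concrete, the following is a minimal, hypothetical sketch, not taken from the paper, of how one might flag a context as "competitive" for a given expected answer type. It assumes spaCy NER is available; the function name `is_competitive` and the string-valued answer types are illustrative assumptions rather than the authors' method.

```python
# Illustrative sketch (not from the paper): a question is treated as
# "competitive" if the context contains an entity of the same type as the
# expected answer, and "non-competitive" otherwise. Requires spaCy and the
# en_core_web_sm model to be installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def is_competitive(context: str, expected_answer_type: str) -> bool:
    """Return True if the context mentions any entity whose NER label
    matches the expected answer type (e.g., "PERSON", "DATE", "GPE")."""
    doc = nlp(context)
    return any(ent.label_ == expected_answer_type for ent in doc.ents)

# Example: an unanswerable "who" question is competitive if the context
# still names some person, even though that person is not the answer.
context = "The ceremony was held in Paris in 1998 and hosted by Marie Curie."
print(is_competitive(context, "PERSON"))  # True  -> competitive IDK question
print(is_competitive(context, "MONEY"))   # False -> non-competitive
```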

Citation (APA)

Sulem, E., Hay, J., & Roth, D. (2021). Do We Know What We Don't Know? Studying Unanswerable Questions beyond SQuAD 2.0. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4543–4548). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.385
