Getting closer to AI complete question answering: A set of prerequisite real tasks

93 citations · 40 Mendeley readers

Abstract

The recent explosion in question answering research has produced a wealth of both factoid reading comprehension (RC) and commonsense reasoning datasets. Combining them presents a different kind of task: deciding not simply whether information is present in the text, but also whether a confident guess could be made for the missing information. We present QuAIL, the first RC dataset to combine text-based, world-knowledge, and unanswerable questions, and to provide question type annotation that enables diagnostics of the reasoning strategies used by a given QA system. QuAIL contains 15K multi-choice questions for 800 texts in 4 domains. Crucially, it offers both general and text-specific questions, unlikely to be found in pretraining data. We show that QuAIL poses substantial challenges to the current state-of-the-art systems, with a 30% drop in accuracy compared to the most similar existing dataset.

Citation (APA)

Rogers, A., Kovaleva, O., Downey, M., & Rumshisky, A. (2020). Getting closer to AI complete question answering: A set of prerequisite real tasks. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 8722–8731). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6398
