Question answering with long multiple-span answers


Abstract

Answering questions in many real-world applications often requires complex and precise information excerpted from texts spanned across a long document. However, no such annotated dataset is currently publicly available, which hinders the development of neural question-answering (QA) systems. To this end, we present MASH-QA, a Multiple Answer Spans Healthcare Question Answering dataset from the consumer health domain, where answers may need to be excerpted from multiple, non-consecutive parts of text spanned across a long document. We also propose MultiCo, a neural architecture that is able to capture the relevance among multiple answer spans, by using a query-based contextualized sentence selection approach, for forming the answer to the given question. We also demonstrate that conventional QA models are not suitable for this type of task and perform poorly in this setting. Extensive experiments are conducted, and the results confirm that the proposed model significantly outperforms state-of-the-art QA models in this multi-span QA setting.
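The multi-span setting described above can be illustrated with a minimal sketch. This is not the MultiCo architecture (which uses query-based contextualized sentence selection with a neural encoder); it is only a toy bag-of-words baseline showing the core idea: score each document sentence against the question independently and keep every sentence above a threshold, so the selected answer may consist of multiple non-consecutive spans. The function names, the threshold value, and the example sentences are all hypothetical.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    # Lowercased bag-of-words vector (hypothetical, simplistic tokenizer)
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_answer_sentences(question: str, document: list[str],
                            threshold: float = 0.25) -> list[str]:
    """Keep all sentences scoring above the threshold, in document
    order, so the answer can span multiple non-consecutive regions.
    A single-span extractive QA model cannot produce such an answer."""
    q = tokenize(question)
    return [s for s in document if cosine(q, tokenize(s)) >= threshold]
```

For example, given a document whose first and third sentences are relevant to the question but whose second is not, this selector returns the two relevant sentences while skipping the intervening one, which is exactly the answer shape a conventional start/end span predictor cannot express.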

Citation (APA)

Zhu, M., Ahuja, A., Juan, D. C., Wei, W., & Reddy, C. K. (2020). Question answering with long multiple-span answers. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 3840–3849). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.342
