Narrative question answering with cutting-edge open-domain QA techniques: A comprehensive study

Citations: N/A · Readers: 50 (Mendeley)

Abstract

Recent advancements in open-domain question answering (ODQA), that is, finding answers from a large open-domain corpus like Wikipedia, have led to human-level performance on many datasets. However, progress in QA over book stories (Book QA) lags despite its similar task formulation to ODQA. This work provides a comprehensive and quantitative analysis of the difficulty of Book QA: (1) We benchmark research on the NarrativeQA dataset with extensive experiments using cutting-edge ODQA techniques. This quantifies the challenges Book QA poses and advances the published state of the art with a ∼7% absolute improvement in ROUGE-L. (2) We further analyze the detailed challenges in Book QA through human studies. Our findings indicate that event-centric questions dominate this task, which exemplifies the inability of existing QA models to handle event-oriented scenarios.

Citation (APA)

Mou, X., Yang, C., Yu, M., Yao, B., Guo, X., Potdar, S., & Su, H. (2021). Narrative question answering with cutting-edge open-domain QA techniques: A comprehensive study. Transactions of the Association for Computational Linguistics, 9, 1032–1046. https://doi.org/10.1162/tacl_a_00411
