Abstract
Although advances in neural architectures for NLP problems and unsupervised pre-training have led to impressive improvements on question answering and natural language inference, reasoning over long texts still poses a great challenge. Here, we consider the task of question answering from full narratives (e.g., books or movie scripts), or their summaries, tackling the NarrativeQA challenge (NQA; Kocisky et al., 2018). We introduce a heuristic extractive version of the data set, which allows us to approach the more feasible problem of answer extraction (rather than generation). We develop models for passage retrieval and answer span prediction using this data set. We use pre-trained BERT embeddings to inject prior knowledge into our system. We show that our setup leads to state-of-the-art performance on summary-level QA. On narrative-level QA, our model performs competitively on the METEOR metric. We analyze the relative contributions of BERT embeddings and the extractive model setup, and provide a detailed error analysis.
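The abstract does not spell out the heuristic used to build the extractive version of the data set. A common way to derive extractive supervision from free-form reference answers is to label, for each question, the passage span with the highest lexical overlap with the human-written answer. The sketch below illustrates that idea in Python; the overlap metric (token-level F1), the span-length cap, and all function names are assumptions for illustration, not the paper's exact procedure.

# Hypothetical sketch: pick the passage span that best matches the reference
# answer by token-level F1, to use as an extractive training target.
from collections import Counter
from typing import List, Tuple

def unigram_f1(span: List[str], answer: List[str]) -> float:
    # Token-level F1 between a candidate span and the reference answer.
    common = Counter(span) & Counter(answer)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(span)
    recall = overlap / len(answer)
    return 2 * precision * recall / (precision + recall)

def best_span(passage: List[str], answer: List[str], max_len: int = 20) -> Tuple[int, int, float]:
    # Return (start, end, score) of the best-matching span, capped at max_len tokens.
    best = (0, 0, 0.0)
    for start in range(len(passage)):
        for end in range(start + 1, min(start + max_len, len(passage)) + 1):
            score = unigram_f1(passage[start:end], answer)
            if score > best[2]:
                best = (start, end, score)
    return best

if __name__ == "__main__":
    passage = "the young wizard finally defeated the dark lord at the castle".split()
    answer = "he defeated the dark lord".split()
    start, end, score = best_span(passage, answer)
    print(passage[start:end], round(score, 2))  # e.g. ['defeated', 'the', 'dark', 'lord'] 0.89

Spans selected this way can then serve as start/end targets for a standard span-prediction model over BERT representations, which is one plausible reading of the pipeline the abstract describes.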