Learning with Instance Bundles for Reading Comprehension

4 citations · 60 Mendeley readers

Abstract

When training most modern reading comprehension models, all the questions associated with a context are treated as being independent of each other. However, closely related questions and their corresponding answers are not independent, and leveraging these relationships could provide a strong supervision signal to a model. Drawing on ideas from contrastive estimation, we introduce several new supervision losses that compare question-answer scores across multiple related instances. Specifically, we normalize these scores across various neighborhoods of closely contrasting questions and/or answers, adding a cross entropy loss term in addition to traditional maximum likelihood estimation. Our techniques require bundles of related question-answer pairs, which we either mine from within existing data or create using automated heuristics. We empirically demonstrate the effectiveness of training with instance bundles on two datasets, HotpotQA and ROPES, showing up to 9% absolute gains in accuracy.
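The contrastive objective the abstract describes can be sketched as follows: scores for the question-answer pairs in a bundle are normalized with a softmax over the bundle, and the resulting cross-entropy term is added to the usual maximum likelihood loss. This is a minimal, framework-free illustration, not the paper's implementation; the function names and the `alpha` weighting knob are assumptions for the sketch.

```python
import math

def log_softmax(scores):
    # Numerically stable log-softmax over a list of raw scores.
    m = max(scores)
    logsum = m + math.log(sum(math.exp(s - m) for s in scores))
    return [s - logsum for s in scores]

def bundle_loss(bundle_scores, gold_index, mle_logprob, alpha=1.0):
    """Sketch of a bundle-level contrastive loss.

    bundle_scores: model scores for each question-answer pair in the bundle
    gold_index:    position of the correct pair within the bundle
    mle_logprob:   the model's standard log-likelihood term for the gold pair
    alpha:         hypothetical weight on the contrastive term (not from the paper)
    """
    # Cross-entropy of the gold pair, normalized across the bundle.
    contrastive_nll = -log_softmax(bundle_scores)[gold_index]
    # Combine with the ordinary MLE loss (negative log-likelihood).
    return -mle_logprob + alpha * contrastive_nll
```

For example, with three equally scored candidates the contrastive term reduces to log 3, so the model is pushed to score the gold pair above its close contrasts rather than merely assign it high absolute likelihood.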

Citation (APA)

Dua, D., Dasigi, P., Singh, S., & Gardner, M. (2021). Learning with Instance Bundles for Reading Comprehension. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 7347–7357). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.584
