Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap

Abstract

Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question-context lexical overlap. This hinders QA models from generalizing to underrepresented samples such as questions with low lexical overlap. Question generation (QG), a method for augmenting QA datasets, can remedy this performance degradation if it properly debiases QA datasets. However, we discover that recent neural QG models are biased towards generating questions with high lexical overlap, which can amplify the dataset bias. Moreover, our analysis reveals that data augmentation with these QG models frequently impairs performance on questions with low lexical overlap, while improving it on questions with high lexical overlap. To address this problem, we use a synonym-replacement-based approach to augment questions with low lexical overlap. We demonstrate that the proposed data augmentation approach is simple yet effective in mitigating the degradation problem with only 70k synthetic examples.
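To make the two core notions in the abstract concrete, here is a minimal Python sketch of (a) measuring question-context lexical overlap and (b) synonym-replacement augmentation that lowers that overlap. This is an illustrative approximation under simplifying assumptions (whitespace tokenization, WordNet synonyms via NLTK), not the authors' released pipeline; the function names lexical_overlap, wordnet_synonyms, and augment_question are hypothetical.

```python
# A minimal sketch (not the authors' released pipeline) of two ideas from
# the abstract: measuring question-context lexical overlap, and lowering a
# question's overlap with its context via WordNet synonym replacement.
# Assumes NLTK with the WordNet corpus downloaded: nltk.download("wordnet").
import random
from nltk.corpus import wordnet as wn

# Function words we never replace; a real implementation would use a
# proper stopword list and a part-of-speech filter.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "in",
             "on", "by", "to", "when", "what", "who", "where", "how", "?"}

def lexical_overlap(question: str, context: str) -> float:
    """Fraction of question tokens that also appear in the context."""
    q_tokens = question.lower().split()
    c_tokens = set(context.lower().split())
    return sum(t in c_tokens for t in q_tokens) / max(len(q_tokens), 1)

def wordnet_synonyms(word: str) -> list[str]:
    """Single-word WordNet synonyms distinct from the input word."""
    names = {lemma.name().replace("_", " ")
             for synset in wn.synsets(word)
             for lemma in synset.lemmas()}
    return sorted(n for n in names
                  if n.lower() != word.lower() and " " not in n)

def augment_question(question: str, context: str, seed: int = 0) -> str:
    """Replace content words shared with the context by synonyms that do
    NOT appear in the context, reducing question-context overlap."""
    rng = random.Random(seed)
    c_tokens = set(context.lower().split())
    out = []
    for token in question.split():
        if token.lower() in c_tokens and token.lower() not in STOPWORDS:
            candidates = [s for s in wordnet_synonyms(token)
                          if s.lower() not in c_tokens]
            if candidates:
                out.append(rng.choice(candidates))
                continue
        out.append(token)
    return " ".join(out)

if __name__ == "__main__":
    context = "The castle was constructed in 1432 by the local duke ."
    question = "When was the castle constructed ?"
    print(lexical_overlap(question, context))              # high overlap
    augmented = augment_question(question, context)
    print(augmented, lexical_overlap(augmented, context))  # lower overlap
```

The demo at the bottom shows the intended effect: content words the question shares with the context (here "castle" and "constructed") are swapped for out-of-context synonyms, so the augmented question tests a QA model beyond surface word matching. A real pipeline would additionally preserve tense and part of speech when choosing replacements.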

Citation (APA)

Shinoda, K., Sugawara, S., & Aizawa, A. (2021). Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, MRQA 2021 (pp. 63–72). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.mrqa-1.6
