Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap

Abstract

Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question-context lexical overlap. This hinders QA models from generalizing to underrepresented samples such as questions with low lexical overlap. Question generation (QG), a method for augmenting QA datasets, can be a solution for such performance degradation if QG can properly debias QA datasets. However, we discover that recent neural QG models are biased towards generating questions with high lexical overlap, which can amplify the dataset bias. Moreover, our analysis reveals that data augmentation with these QG models frequently impairs the performance on questions with low lexical overlap, while improving the performance on questions with high lexical overlap. To address this problem, we use a synonym replacement-based approach to augment questions with low lexical overlap. We demonstrate that the proposed data augmentation approach is simple yet effective at mitigating the degradation problem with only 70k synthetic examples.
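
To make the reported bias concrete, here is a minimal sketch of one plausible way to quantify question-context lexical overlap: the fraction of question words that also appear in the context. Both the metric and the helper name lexical_overlap are illustrative assumptions, not necessarily the exact measure used in the paper.

```python
import re


def lexical_overlap(question: str, context: str) -> float:
    """Fraction of unique question words that also occur in the context."""
    q_words = set(re.findall(r"\w+", question.lower()))
    c_words = set(re.findall(r"\w+", context.lower()))
    return len(q_words & c_words) / len(q_words) if q_words else 0.0


context = "The Eiffel Tower was completed in 1889 for the World's Fair in Paris."

# A question that reuses the context's wording scores high ...
print(lexical_overlap("When was the Eiffel Tower completed?", context))  # ~0.83

# ... while a paraphrased question scores much lower.
print(lexical_overlap("In which year did workers finish the landmark?", context))  # 0.25
```

Under such a measure, the paper's claim is that neural QG models disproportionately produce questions like the first example, so augmenting a QA dataset with their outputs skews the training distribution further toward high overlap.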

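Below is a similarly hedged sketch of the synonym-replacement idea described in the abstract, using WordNet via NLTK (WordNet appears in the paper's reference list, but the specific heuristics here, such as the minimum word length and the random choice among synonyms, are illustrative assumptions rather than the authors' exact procedure). Replacing question words that also occur in the context with synonyms yields augmented questions with lower lexical overlap.

```python
import random
import re

import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)  # fetch the WordNet data on first use


def single_word_synonyms(word: str) -> list[str]:
    """Collect single-word WordNet synonyms of `word` (any part of speech)."""
    lemmas = {
        lemma.name()
        for synset in wordnet.synsets(word)
        for lemma in synset.lemmas()
    }
    return [l for l in lemmas if "_" not in l and l.lower() != word.lower()]


def lower_overlap(question: str, context: str, min_len: int = 4) -> str:
    """Replace question words that also appear in the context with synonyms.

    `min_len` is a crude guard against rewriting stopwords; it is an
    illustrative heuristic, not part of the original method.
    """
    context_words = set(re.findall(r"\w+", context.lower()))

    def maybe_replace(match: re.Match) -> str:
        word = match.group(0)
        if len(word) >= min_len and word.lower() in context_words:
            synonyms = single_word_synonyms(word.lower())
            if synonyms:
                return random.choice(synonyms)
        return word

    return re.sub(r"\w+", maybe_replace, question)


context = "The Eiffel Tower was completed in 1889 for the World's Fair."
print(lower_overlap("When was the Eiffel Tower completed?", context))
# e.g. "When was the Eiffel pillar complete?" (case and inflection are not
# preserved; a real implementation would need to handle both)
```

Per the abstract, only about 70k synthetic examples produced along these lines were needed to mitigate the degradation on low-overlap questions.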

Citation (APA)

Shinoda, K., Sugawara, S., & Aizawa, A. (2021). Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, MRQA 2021 (pp. 63–72). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.mrqa-1.6
