If a question cannot be answered with the available information, robust question answering (QA) systems should know not to answer. One way to build QA models that do this is with additional training data composed of unanswerable questions, created either by employing annotators or through automated methods for unanswerable question generation. To show that the model complexity of existing automated approaches is not justified, we examine a simpler data augmentation method for unanswerable question generation in English: performing antonym and entity swaps on answerable questions. Compared to the prior state-of-the-art, data generated with our training-free and lightweight strategy results in better models (+1.6 F1 points on SQuAD 2.0 data with BERT-large) and has higher human-judged relatedness and readability. We quantify the raw benefits of our approach compared to no augmentation across multiple encoder models, using different amounts of generated data, and also on TydiQA-MinSpan data (+9.3 F1 points with BERT-large). Our results establish swaps as a simple but strong baseline for future work.
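The swap-based augmentation can be pictured with a short sketch. The code below is not the authors' released implementation; it is a minimal illustration under stated assumptions, using spaCy (en_core_web_sm) for named-entity recognition and NLTK's WordNet for antonym lookup. It replaces one named entity with a same-typed entity taken from another question, or one content word with a WordNet antonym.

```python
# Minimal sketch of antonym and entity swaps on answerable questions.
# Assumptions (not from the paper's released code): spaCy's en_core_web_sm
# model for NER and NLTK's WordNet for antonyms (run nltk.download("wordnet")
# once beforehand).
import random

import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")


def antonym_swap(question: str) -> str | None:
    """Swap the first content word that has a WordNet antonym."""
    for token in nlp(question):
        if token.pos_ not in {"ADJ", "ADV", "VERB", "NOUN"}:
            continue
        antonyms = [
            ant.name().replace("_", " ")
            for syn in wn.synsets(token.text)
            for lemma in syn.lemmas()
            for ant in lemma.antonyms()
        ]
        if antonyms:
            return question.replace(token.text, antonyms[0], 1)
    return None  # no swappable word found


def entity_swap(question: str, pool: dict[str, list[str]]) -> str | None:
    """Swap one named entity for a different entity of the same type,
    drawn from entities seen in other answerable questions."""
    for ent in nlp(question).ents:
        candidates = [e for e in pool.get(ent.label_, []) if e != ent.text]
        if candidates:
            return question.replace(ent.text, random.choice(candidates), 1)
    return None


if __name__ == "__main__":
    questions = [
        "When did Marie Curie win her first Nobel Prize?",
        "Where was Albert Einstein born?",
        "Which country has the largest population in Europe?",
    ]
    # Entity pool built from the answerable questions themselves.
    pool: dict[str, list[str]] = {}
    for q in questions:
        for ent in nlp(q).ents:
            pool.setdefault(ent.label_, []).append(ent.text)

    for q in questions:
        print("original:", q)
        print("antonym :", antonym_swap(q))
        print("entity  :", entity_swap(q, pool))
```

Either swap leaves the generated question superficially similar to the original, so it stays related to the passage while, by design, no longer being answerable from it.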
CITATION
Gautam, V., Zhang, M., & Klakow, D. (2023). A Lightweight Method to Generate Unanswerable Questions in English. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 7349–7360). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.491