Learning Invariant Representation Improves Robustness for MRC Models


Abstract

The success of Pretrained Language Models (PLMs) has greatly advanced Machine Reading Comprehension (MRC). However, these models remain vulnerable to adversarial examples. In this paper, we propose Stable and Contrastive Question Answering (SCQA), which improves the invariance of model representations to alleviate these robustness issues. Specifically, we first construct positive example pairs that share the same answer through data augmentation. SCQA then learns enhanced representations with better alignment between positive pairs by introducing a stability loss and a contrastive loss. Experimental results show that our approach significantly and consistently boosts the robustness of QA models across different MRC tasks and attack sets.
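The abstract describes aligning representations of augmented positive pairs with a contrastive objective. Below is a minimal sketch of such an alignment loss, assuming an InfoNCE-style formulation with in-batch negatives over encoder outputs; the exact stability and contrastive losses used in SCQA are not specified in this abstract, so the function names and details here are illustrative assumptions.

```python
# Illustrative sketch only: an InfoNCE-style contrastive loss over positive pairs
# produced by data augmentation (same answer, different surface form). The actual
# SCQA objective may differ.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(reps_orig: torch.Tensor,
                               reps_aug: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """reps_orig[i] and reps_aug[i] are encoder representations of a positive pair;
    other items in the batch act as in-batch negatives."""
    a = F.normalize(reps_orig, dim=-1)
    b = F.normalize(reps_aug, dim=-1)
    logits = a @ b.t() / temperature                 # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric InfoNCE: each example should be most similar to its own positive.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    # Random features stand in for PLM encoder outputs (e.g. pooled [CLS] vectors).
    orig = torch.randn(8, 768)
    aug = torch.randn(8, 768)
    print(contrastive_alignment_loss(orig, aug).item())
```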

Citation (APA)

Yu, H., Wen, L., Meng, H., Liu, T., & Wang, H. (2022). Learning Invariant Representation Improves Robustness for MRC Models. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 3306–3314). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.479
