Abstract
A lack of robustness is a serious problem for Machine Reading Comprehension (MRC) models. One of the most promising ways to alleviate this problem is to augment the training dataset with carefully designed adversarial examples. Generally, such examples are created by rules derived from the observed patterns of successful adversarial attacks. Since the types of adversarial examples are innumerable, manually designing and enriching training data is not adequate to defend against all types of adversarial attacks. In this paper, we propose a novel robust adversarial training approach that improves the robustness of MRC models in a more generic way. Given an MRC model well trained on the original dataset, our approach dynamically generates adversarial examples based on the parameters of the current model and further trains the model on the generated examples in an iterative schedule. When applied to state-of-the-art MRC models, including QANet, BERT and ERNIE 2.0, our approach obtains significant and comprehensive improvements on 5 adversarial datasets constructed in different ways, without sacrificing performance on the original SQuAD development set. Moreover, when coupled with another data augmentation strategy, our approach further boosts overall performance on the adversarial datasets and outperforms the state-of-the-art methods.
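The iterative schedule described above (generate adversarial examples from the current model parameters, then continue training on them, repeated over several rounds) can be sketched roughly as follows. This is a minimal PyTorch-style illustration under assumed names; generate_adversarial_examples, the batch size, the learning rate and the number of rounds are placeholders for exposition, not the authors' implementation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, ConcatDataset


def generate_adversarial_examples(model, dataset):
    """Hypothetical helper: craft adversarial examples using the *current*
    model parameters (e.g. by perturbing inputs the model is most confident
    about). Here it simply returns the original data as a stand-in."""
    return dataset


def iterative_adversarial_training(model, original_dataset,
                                   rounds=3, epochs_per_round=1, lr=3e-5):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(rounds):
        # 1) Generate adversarial examples from the current model state.
        adv_dataset = generate_adversarial_examples(model, original_dataset)
        # 2) Continue training on the union of original and adversarial data,
        #    so performance on the original data is not sacrificed.
        loader = DataLoader(ConcatDataset([original_dataset, adv_dataset]),
                            batch_size=8, shuffle=True)
        for _ in range(epochs_per_round):
            for inputs, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(inputs), labels)
                loss.backward()
                optimizer.step()
    return model
```

The key point of the schedule is that the adversarial set is regenerated from the updated model at every round, so the augmentation adapts to the model's current weaknesses rather than relying on a fixed, hand-designed attack pattern.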
Citation
Liu, K., Liu, X., Yang, A., Liu, J., Su, J., Li, S., & She, Q. (2020). A robust adversarial training approach to machine reading comprehension. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 8392–8400). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6357