In and Out-of-Domain Text Adversarial Robustness via Label Smoothing

Abstract

Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions). While several defense techniques have been proposed and adapted to the discrete nature of text adversarial attacks, the benefits of general-purpose regularization methods, such as label smoothing, for language models have not been studied. In this paper, we study the adversarial robustness provided by label smoothing strategies in foundational models for diverse NLP tasks in both in-domain and out-of-domain settings. Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT against various popular attacks. We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
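For readers unfamiliar with the technique, label smoothing replaces the one-hot training target with a mixture of the one-hot label and a uniform distribution over the classes, which discourages over-confident predictions. The PyTorch sketch below shows this standard formulation; it is illustrative only, and the smoothing coefficient epsilon=0.1 is an assumed example value, not necessarily the setting used in the paper.

    import torch
    import torch.nn.functional as F

    def label_smoothing_loss(logits, targets, epsilon=0.1):
        # Cross-entropy against a smoothed target distribution:
        # (1 - epsilon) on the true class, epsilon spread uniformly over all classes.
        log_probs = F.log_softmax(logits, dim=-1)
        nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # true-class term
        uniform = -log_probs.mean(dim=-1)                               # uniform term
        return ((1.0 - epsilon) * nll + epsilon * uniform).mean()

    logits = torch.randn(4, 3)            # batch of 4 examples, 3 classes
    targets = torch.tensor([0, 2, 1, 0])
    loss = label_smoothing_loss(logits, targets)

    # Recent PyTorch versions expose the same objective directly:
    # loss = F.cross_entropy(logits, targets, label_smoothing=0.1)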

Cite (APA)

Yang, Y., Dan, S., Roth, D., & Lee, I. (2023). In and Out-of-Domain Text Adversarial Robustness via Label Smoothing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 657–669). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.58
