Robust Weight Perturbation for Adversarial Training


Abstract

Robust overfitting is widespread in the adversarial training of deep networks. An effective remedy is adversarial weight perturbation, which injects the worst-case weight perturbation during network training by maximizing the classification loss on adversarial examples. Adversarial weight perturbation helps reduce the robust generalization gap; however, it can also undermine the robustness improvement itself. A criterion that regulates the weight perturbation is therefore crucial for adversarial training. In this paper, we propose such a criterion, namely the Loss Stationary Condition (LSC) for constrained perturbation. With LSC, we find that it is essential to conduct weight perturbation on adversarial data with small classification loss in order to eliminate robust overfitting. Weight perturbation on adversarial data with large classification loss is not necessary and may even lead to poor robustness. Based on these observations, we propose a robust perturbation strategy that constrains the extent of weight perturbation. This strategy prevents deep networks from overfitting while avoiding the side effect of excessive weight perturbation, significantly improving the robustness of adversarial training. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art adversarial training methods.
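The core idea of the LSC criterion, as described in the abstract, is to restrict weight perturbation to adversarial examples whose classification loss is small, rather than perturbing on all examples. The following is a minimal sketch of that selection mechanism using NumPy; the threshold values, function names, and the simple mean-of-gradients aggregation are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def lsc_mask(losses, c_min=0.0, c_max=2.0):
    """Select adversarial examples satisfying the Loss Stationary Condition.

    Keeps only examples whose classification loss lies in [c_min, c_max];
    the interval bounds here are hypothetical placeholders.
    """
    losses = np.asarray(losses, dtype=float)
    return (losses >= c_min) & (losses <= c_max)

def constrained_weight_perturbation(per_example_grads, losses, c_max=2.0):
    """Compute a weight-perturbation direction only from LSC-admissible examples.

    per_example_grads: array of shape (n_examples, n_params), the gradient of
    each example's adversarial loss w.r.t. the weights. Examples with large
    loss are excluded, mimicking the constrained perturbation described above.
    """
    per_example_grads = np.asarray(per_example_grads, dtype=float)
    mask = lsc_mask(losses, c_max=c_max)
    if not mask.any():
        # No example satisfies LSC: apply no weight perturbation.
        return np.zeros(per_example_grads.shape[1])
    return per_example_grads[mask].mean(axis=0)
```

For example, with per-example losses `[0.1, 5.0, 1.0]` and `c_max=2.0`, only the first and third examples contribute to the perturbation direction; the high-loss second example is excluded.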

Citation (APA)

Yu, C., Han, B., Gong, M., Shen, L., Ge, S., Du, B., & Liu, T. (2022). Robust Weight Perturbation for Adversarial Training. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3688–3694). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/512
