Toward Few-step Adversarial Training from a Frequency Perspective


Abstract

We investigate adversarial-sample generation methods from a frequency-domain perspective and extend standard ℓ∞ Projected Gradient Descent (PGD) to the frequency domain. The resulting method, which we call Spectral Projected Gradient Descent (SPGD), achieves a higher success rate than PGD during the early steps of the attack. Adversarially training models with SPGD yields greater adversarial accuracy than training with PGD when the number of attack steps is held constant; SPGD can therefore reduce the overhead of adversarial training when adversarial examples are generated with few steps. However, we also prove that SPGD is equivalent to a variant of the PGD ordinarily used for the ℓ∞ threat model, namely one that omits the sign function usually applied to the gradient. SPGD can therefore be performed without explicitly transforming into the frequency domain. Finally, we visualize the perturbations SPGD generates and find that they use both high- and low-frequency components, which suggests that removing either high-frequency or low-frequency components is not an effective defense.
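The equivalence claimed in the abstract can be illustrated with a small numerical sketch: because a frequency transform is an orthonormal (unitary) change of basis, a sign-free gradient step taken on the transform coefficients and mapped back equals the same step taken directly in the pixel domain. The snippet below is not the paper's implementation; it uses a random orthonormal matrix as a stand-in for the spectral transform and omits the ε-ball projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Orthonormal "frequency" transform (stand-in for the DFT/DCT;
# any orthonormal basis behaves the same way here).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)   # input (e.g. a flattened image)
g = rng.standard_normal(n)   # loss gradient w.r.t. x
alpha = 0.1                  # step size

# Sign-free PGD step taken directly in the pixel domain.
x_pixel = x + alpha * g

# The same step taken on the transform coefficients:
# z = Qx, and the gradient w.r.t. z is Qg by the chain rule.
z = Q @ x
z_step = z + alpha * (Q @ g)
x_freq = Q.T @ z_step        # transform back to the pixel domain

# Both routes produce the same perturbed input.
assert np.allclose(x_pixel, x_freq)
```

The sign function breaks this equivalence: `np.sign(Q @ g)` applied in the transform domain is not the transform of `np.sign(g)`, which is why the frequency-domain attack only coincides with a pixel-domain PGD variant once the sign is dropped.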

Citation (APA)

Wang, H. S. H., Cornelius, C., Edwards, B., & Martin, J. (2020). Toward Few-step Adversarial Training from a Frequency Perspective. In SPAI 2020 - Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence, Co-located with AsiaCCS 2020 (pp. 11–19). Association for Computing Machinery, Inc. https://doi.org/10.1145/3385003.3410922
