Improving Model Robustness through Hybrid Adversarial Training: Integrating FGSM and PGD Methods

  • Zhong Z

Abstract

As deep learning models are deployed in ever more applications, their vulnerability to adversarial attacks has become increasingly apparent. Adversarial training is an effective strategy for defending against such attacks. Weighing the advantages and disadvantages of the two current mainstream approaches, Fast Gradient Sign Method (FGSM) adversarial training and Projected Gradient Descent (PGD) adversarial training, this paper proposes a hybrid adversarial training scheme that integrates the FGSM and PGD methods and evaluates it with a ResNet-18 model on the SVHN dataset. Experimental results show that hybrid adversarial training effectively reduces training time. Its accuracy on the original (clean) dataset is about 2% higher than that of PGD adversarial training. Against FGSM attacks, its performance is almost the same as that of pure FGSM adversarial training; against PGD attacks, its performance degrades more noticeably, about 2% to 3% below that of PGD adversarial training. This study contributes both to understanding how hybrid adversarial training affects model robustness under adversarial attacks and to the design of new adversarial training strategies.
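The abstract does not specify how the two methods are combined, so the following is only a minimal PyTorch sketch of one plausible hybrid scheme: each mini-batch is split so that a fraction of examples is crafted with multi-step PGD and the remainder with the cheaper single-step FGSM. All names (`fgsm_attack`, `pgd_attack`, `hybrid_train_step`) and hyperparameters (`epsilon`, `alpha`, `steps`, `pgd_fraction`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Single-step FGSM: move each input by epsilon along the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, epsilon, alpha=2 / 255, steps=7):
    # Multi-step PGD: repeated small gradient-sign steps, each projected back
    # into the L-infinity ball of radius epsilon around the clean input.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv.detach()

def hybrid_train_step(model, optimizer, x, y, epsilon=8 / 255, pgd_fraction=0.5):
    # Hypothetical hybrid step: craft part of the batch with expensive PGD and
    # the rest with cheap FGSM, then train on the combined adversarial batch.
    k = int(len(x) * pgd_fraction)
    x_adv = torch.cat([
        pgd_attack(model, x[:k], y[:k], epsilon),
        fgsm_attack(model, x[k:], y[k:], epsilon),
    ])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, only a fraction of each batch pays the cost of the multi-step PGD inner loop, which is consistent with the reported trade-off: shorter training time and near-FGSM robustness, at the cost of roughly 2% to 3% accuracy under PGD attacks compared with pure PGD adversarial training.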

Citation (APA)

Zhong, Z. (2024). Improving Model Robustness through Hybrid Adversarial Training: Integrating FGSM and PGD Methods. Applied and Computational Engineering, 109(1), 57–62. https://doi.org/10.54254/2755-2721/109/20241413
