Resilience against Adversarial Examples: Data-Augmentation Exploiting Generative Adversarial Networks


Abstract

Recently, malware classification based on Deep Neural Networks (DNN) has gained significant attention due to the rise in popularity of artificial intelligence (AI). DNN-based malware classifiers are a novel solution for combating never-before-seen malware families because they classify malware based on structural characteristics rather than requiring particular signatures, as traditional malware classifiers do. However, these DNN-based classifiers have been found to lack robustness against malware that is carefully crafted to evade detection. Such specially crafted pieces of malware are referred to as adversarial examples (AEs). We consider a clever adversary who has thorough knowledge of DNN-based malware classifiers and exploits it to generate crafted malware that fools them. In this paper, we propose a DNN-based malware classifier that becomes resilient to these kinds of attacks through Generative Adversarial Network (GAN) based data augmentation. The experimental results show that the proposed scheme classifies malware, including AEs, with a false positive rate (FPR) of 3.0% and a balanced accuracy of 70.16%, improvements of 26.1% and 18.5%, respectively, over a traditional DNN-based classifier that does not exploit a GAN.
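The augmentation idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generator` stub stands in for a trained GAN generator (the paper's actual architecture is not described here), the feature vectors are random toy data, and the metric helper simply computes the FPR and balanced accuracy figures the abstract reports from a confusion matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained GAN generator: maps latent noise
# to synthetic malware-like feature vectors. In the paper, this would be
# a network trained adversarially against a discriminator.
def generator(z):
    projection = rng.standard_normal((z.shape[1], 8))
    return np.tanh(z @ projection)

# Toy "real" training set: 8-dim feature vectors, label 1 = malware, 0 = benign.
X_real = rng.standard_normal((100, 8))
y_real = rng.integers(0, 2, size=100)

# GAN-based data augmentation: generate AE-like samples and label them as
# malware, so the classifier sees evasive inputs during training.
z = rng.standard_normal((50, 16))
X_syn = generator(z)
y_syn = np.ones(50, dtype=int)

X_aug = np.vstack([X_real, X_syn])           # augmented features
y_aug = np.concatenate([y_real, y_syn])      # augmented labels

def fpr_and_balanced_accuracy(y_true, y_pred):
    """Metrics used in the abstract, computed from a binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fpr = fp / (fp + tn)                         # false positive rate
    bal_acc = 0.5 * (tp / (tp + fn) + tn / (tn + fp))  # balanced accuracy
    return fpr, bal_acc
```

A DNN classifier would then be trained on `(X_aug, y_aug)` instead of `(X_real, y_real)`, and evaluated with `fpr_and_balanced_accuracy` on a test set that includes adversarial examples.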

Citation (APA)

Kang, M., Kim, H. K., Lee, S., & Han, S. (2021). Resilience against Adversarial Examples: Data-Augmentation Exploiting Generative Adversarial Networks. KSII Transactions on Internet and Information Systems, 15(11), 4105–4121. https://doi.org/10.3837/TIIS.2021.11.013
