On Effectiveness of Adversarial Examples and Defenses for Malware Classification


Abstract

Artificial neural networks have been used successfully for many classification tasks, including malware detection, i.e., distinguishing between malicious and non-malicious programs. Although artificial neural networks perform very well on these tasks, they are also vulnerable to adversarial examples. An adversarial example is a sample with minor modifications that cause the neural network to misclassify it. Many techniques have been proposed, both for crafting adversarial examples and for hardening neural networks against them, but most previous work has been done in the image domain. Some of these attacks have been adapted to the malware domain, which typically deals with binary feature vectors. In order to better understand the space of adversarial examples in malware classification, we study different approaches to crafting adversarial examples and defense techniques in the malware domain and compare their effectiveness on multiple data sets.
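
For readers unfamiliar with the setting, the sketch below illustrates the general idea of an adversarial example over binary malware features: a toy linear scorer and gradient-guided bit flips restricted to adding features (0 to 1), a constraint commonly used in the malware domain to avoid breaking program functionality. The model, weights, and flip budget here are illustrative assumptions only; they are not the classifiers, data sets, or attacks evaluated in the paper.

    # Minimal, hypothetical sketch: gradient-guided bit flips on a binary feature vector.
    # The toy linear "classifier" is an assumption for illustration, not the paper's model.
    import numpy as np

    rng = np.random.default_rng(0)

    n_features = 20
    w = rng.normal(size=n_features)   # toy weights of a linear malware scorer
    b = 0.0

    def malicious_score(x):
        """Sigmoid score; >= 0.5 means 'malware' for this toy model."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    x = (rng.random(n_features) < 0.5).astype(float)   # original binary sample
    x_adv = x.copy()
    budget = 5                                         # max number of features we may flip

    for _ in range(budget):
        if malicious_score(x_adv) < 0.5:               # already classified as benign
            break
        # For a linear scorer the gradient w.r.t. each feature is just w, so
        # flipping a 0-feature with a negative weight lowers the malicious score.
        candidates = np.where((x_adv == 0) & (w < 0))[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmin(w[candidates])]    # most score-reducing bit to flip
        x_adv[best] = 1.0

    print("original score:   ", malicious_score(x))
    print("adversarial score:", malicious_score(x_adv))
    print("bits flipped:     ", int(np.sum(x_adv != x)))

Attacks studied in this line of work follow the same pattern but use the gradients of a trained neural network and larger feature spaces; the point of the sketch is only to show how small, functionality-preserving changes to a binary feature vector can move a sample across the decision boundary.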

Citation (APA)
Podschwadt, R., & Takabi, H. (2019). On Effectiveness of Adversarial Examples and Defenses for Malware Classification. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 305 LNICST, pp. 380–393). Springer. https://doi.org/10.1007/978-3-030-37231-6_22
