Adversarial deep learning for robust detection of binary encoded malware

Citations: 163 · Mendeley readers: 208
Abstract

Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples: modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimization formulations. Inspired by them, we develop similar methods for the discrete, e.g. binary, domain that characterizes the features of malware. A specific extra challenge of malware is that adversarial examples must be generated in a way that preserves their malicious functionality. We introduce methods capable of generating functionality-preserving adversarial malware examples in the binary domain. Using the saddle-point formulation, we incorporate the adversarial examples into the training of models that are robust to them. We evaluate the effectiveness of these methods and others from the literature on a set of Portable Executable (PE) files. The comparison prompts us to introduce an online measure, computed during training, that assesses the general expectation of robustness.
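The saddle-point view in the abstract follows the standard robust-optimization form min_θ E_{(x,y)} [ max_{x' ∈ S(x)} L(θ, x', y) ], where S(x) is the set of functionality-preserving binary perturbations of the feature vector x. Below is a minimal sketch of one step of the inner maximization, assuming (as the binary-malware setting suggests) that functionality is preserved by only setting feature bits from 0 to 1 — e.g. adding imported APIs, never removing them. The function name, gradient interface, and parameter `k` are illustrative, not taken from the paper:

```python
import numpy as np

def perturb_binary(x, grad, k=1):
    """One step of a gradient-guided bit-flip attack on binary features.

    Functionality is preserved by only *setting* bits (0 -> 1), i.e.
    adding features, never removing ones the malware relies on.
    x    : binary feature vector (0/1), shape (d,)
    grad : gradient of the detector's loss w.r.t. x, shape (d,)
    k    : number of bits to flip in this step
    """
    x_adv = x.copy()
    # candidate bits: currently 0 and increasing the loss when set
    candidates = np.where((x_adv == 0) & (grad > 0))[0]
    if candidates.size == 0:
        return x_adv
    # flip the k candidates with the largest gradient components
    top = candidates[np.argsort(grad[candidates])[::-1][:k]]
    x_adv[top] = 1
    return x_adv
```

In adversarial training, such a step would run inside the training loop: each clean malware sample is perturbed against the current model, and the model is then updated on the perturbed sample, approximating the outer minimization over θ.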

Citation (APA)

Al-Dujaili, A., Huang, A., Hemberg, E., & O’Reilly, U. M. (2018). Adversarial deep learning for robust detection of binary encoded malware. In Proceedings - 2018 IEEE Symposium on Security and Privacy Workshops, SPW 2018 (pp. 76–82). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SPW.2018.00020
