Assessing Transferability of Adversarial Examples against Malware Detection Classifiers

Abstract

Machine learning (ML) algorithms outperform traditional algorithms in many applications. However, ML classifiers are vulnerable to adversarial examples: inputs crafted by adding small but purposeful distortions to natural examples in order to fool the model. This paper investigates the transferability of adversarial examples generated on a sparse, structured dataset, and the ability of adversarial training to resist such examples. The results demonstrate that adversarial examples generated on a deep neural network (DNN) can fool a range of ML classifiers, including decision trees, random forests, SVMs, CNNs, and RNNs. Adversarial training also improves the robustness of the DNN against these attacks.
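The paper's exact attack, dataset, and models are not reproduced here. The sketch below only illustrates the kind of transferability experiment the abstract describes: craft adversarial examples on a surrogate DNN, then measure how much they degrade an independently trained target classifier, and finally retrain the DNN adversarially. The FGSM attack, the synthetic sparse features, and every parameter (model sizes, epsilon, epochs) are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a transferability experiment (not the paper's code).
# Assumptions: FGSM as the attack, synthetic sparse binary features as a
# stand-in for the malware dataset, a random forest as one target classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a sparse, structured malware feature matrix
# (e.g. binary API-call indicators): 2000 samples, 100 features.
X = (rng.random((2000, 100)) < 0.1).astype(np.float32)
w = rng.normal(size=100)
scores = X @ w + rng.normal(scale=0.5, size=2000)
y = (scores > np.median(scores)).astype(np.int64)

X_t, y_t = torch.from_numpy(X), torch.from_numpy(y)

# Surrogate DNN used to craft the adversarial examples.
dnn = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(dnn(X_t), y_t).backward()
    opt.step()

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation, clipped back to the [0, 1] feature range.
    Note: real malware features would need validity constraints; this
    continuous relaxation is purely illustrative."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

X_adv = fgsm(dnn, X_t, y_t, eps=0.2)

# Independently trained target classifier; a drop in accuracy on X_adv
# relative to X indicates that the attack transfers.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("RF accuracy, clean:      ", rf.score(X, y))
print("RF accuracy, adversarial:", rf.score(X_adv.numpy(), y))

# Minimal adversarial training pass: retrain the DNN on a mix of clean
# and freshly crafted adversarial batches to improve its robustness.
for _ in range(200):
    x_adv = fgsm(dnn, X_t, y_t, eps=0.2)
    opt.zero_grad()
    (loss_fn(dnn(X_t), y_t) + loss_fn(dnn(x_adv), y_t)).backward()
    opt.step()
```

In this setup the same comparison would be repeated for each target model family (decision tree, SVM, CNN, RNN) to assess how broadly the surrogate's adversarial examples transfer.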

Citation (APA)
Wang, Y., Liu, J., & Chang, X. (2019). Assessing Transferability of Adversarial Examples against Malware Detection Classifiers. In ACM International Conference on Computing Frontiers 2019, CF 2019 - Proceedings (pp. 211–214). Association for Computing Machinery, Inc. https://doi.org/10.1145/3310273.3323072
