Abstract
Deep-learning-based malware-detection models are threatened by adversarial attacks. This paper designs a robust and secure convolutional neural network (CNN) for malware classification. First, three CNNs with different pooling layers, namely global average pooling (GAP), global max pooling (GMP), and spatial pyramid pooling (SPP), are proposed. Second, an executable adversarial attack is designed that constructs adversarial malware by altering meaningless and unimportant segments within the Portable Executable (PE) header. Finally, to harden the GMP-based CNN, a header-aware loss algorithm based on the attention mechanism is proposed to defend against the executable adversarial attack. The experiments showed that the GMP-based CNN outperformed the other CNNs in malware detection, with around (Formula presented.) accuracy. However, all CNNs were vulnerable to the executable adversarial attack and to a fast gradient-based attack, with average accuracy declines of (Formula presented.) and (Formula presented.), respectively. Meanwhile, the improved header-aware CNN achieved the best performance, with an evasion ratio of less than (Formula presented.).
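The three pooling variants compared in the abstract all reduce a convolutional feature map to a fixed-size vector, which is what lets a CNN accept variable-length binaries. A minimal NumPy sketch of the three operations (the exact network architecture, pyramid levels, and layer shapes here are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def global_avg_pool(fmap):
    """GAP: average each channel over the length axis; fmap is (channels, length)."""
    return fmap.mean(axis=1)

def global_max_pool(fmap):
    """GMP: keep each channel's strongest activation, regardless of position."""
    return fmap.max(axis=1)

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """SPP: max-pool over progressively finer partitions of the length axis
    and concatenate, yielding a (channels * sum(levels)) vector for any length.
    The levels (1, 2, 4) are an illustrative assumption."""
    channels, length = fmap.shape
    pooled = []
    for bins in levels:
        edges = np.linspace(0, length, bins + 1).astype(int)
        for lo, hi in zip(edges[:-1], edges[1:]):
            # Guard against empty bins when length < bins.
            pooled.append(fmap[:, lo:max(hi, lo + 1)].max(axis=1))
    return np.concatenate(pooled)
```

Because GMP keeps only the peak activation per channel, a small perturbation confined to unimportant header bytes can shift which position wins the max, which is consistent with the vulnerability the paper exploits and then mitigates with a header-aware loss.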
Citation
Zhang, Y., Jiang, J., Yi, C., Li, H., Min, S., Zuo, R., … Yu, Y. (2024). A Robust CNN for Malware Classification against Executable Adversarial Attack. Electronics (Switzerland), 13(5). https://doi.org/10.3390/electronics13050989