Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts

Abstract

Adversarial machine learning is an area of study that examines both the generation and the detection of adversarial examples: inputs specially crafted to deceive classifiers. It has been researched most extensively in image recognition, where humanly imperceptible modifications to an image cause a classifier to make incorrect predictions. The main objective of this paper is to study the behavior of multiple state-of-the-art machine learning algorithms in an adversarial context. To perform this study, six different classification algorithms were evaluated on two datasets, NSL-KDD and CICIDS2017, and four adversarial attack techniques were implemented with multiple perturbation magnitudes. Furthermore, the effectiveness of training the models on adversarial examples to improve recognition was also tested. The results show that the adversarial attacks degrade the performance of all classifiers by between 13% and 40%, with the Denoising Autoencoder being the technique most resilient to attacks.
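
The abstract does not name the four attack techniques, so the following is only a minimal sketch of one common gradient-based evasion attack (FGSM-style) applied at multiple perturbation magnitudes, in the spirit of the evaluation described above. The toy classifier, the feature count, and the epsilon values are all hypothetical, not taken from the paper.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Shift x by epsilon in the direction that increases the loss (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical binary intrusion-detection classifier over 40 flow features.
model = nn.Sequential(nn.Linear(40, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(8, 40)                 # a batch of (synthetic) flow records
y = torch.zeros(8, dtype=torch.long)   # all labelled "benign"

for epsilon in (0.01, 0.05, 0.1):      # multiple perturbation magnitudes
    x_adv = fgsm_perturb(model, x, y, epsilon)
    flipped = (model(x_adv).argmax(1) != y).float().mean().item()
    print(f"epsilon={epsilon}: {flipped:.0%} of predictions flipped")

Adversarial training of the kind the abstract mentions would, under the same assumptions, roughly amount to appending such x_adv batches (with their original labels) to the training set before refitting the model.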

Citation (APA)

Martins, N., Cruz, J. M., Cruz, T., & Abreu, P. H. (2019). Analyzing the Footprint of Classifiers in Adversarial Denial of Service Contexts. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11805 LNAI, pp. 256–267). Springer Verlag. https://doi.org/10.1007/978-3-030-30244-3_22
