Degrading Detection Performance of Wireless IDSs through Poisoning Feature Selection


Abstract

Machine learning algorithms have been increasingly adopted in Intrusion Detection Systems (IDSs) and have achieved demonstrable results, but few studies have considered the intrinsic vulnerabilities of these algorithms in adversarial environments. In our work, we use a poisoning attack to degrade the accuracy of wireless IDSs that rely on feature selection algorithms. Specifically, we use a gradient poisoning method to generate adversarial examples that induce the classifier to select a feature subset maximizing the classification error rate. We treat this as a box-constrained problem and use Lagrange multipliers and backtracking line search to find a feasible gradient. To evaluate our method, we experimentally demonstrate that our attack can influence machine learning pipelines, including filter and embedded feature selection algorithms, using three public benchmark network datasets and a wireless sensor network dataset: KDD99, NSL-KDD, Kyoto 2006+, and WSN-DS. Our results show that the gradient poisoning method causes a significant drop of about 20% in the classification accuracy of IDSs.
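The abstract's core ingredients (gradient ascent on the classifier's loss, a box constraint on feature values, and backtracking line search on the step size) can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a fixed logistic-regression surrogate rather than the full bilevel poisoning objective, and all function names and parameters here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Logistic loss of a fixed surrogate classifier on a single point.
    p = sigmoid(w @ x + b)
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def grad_x(w, b, x, y):
    # Gradient of the logistic loss with respect to the input features.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def poison_point(w, b, x0, y, lo=0.0, hi=1.0, iters=50):
    """Gradient *ascent* on the surrogate loss, projecting each step
    onto the [lo, hi] box and shrinking the step size by backtracking
    line search until the projected move actually increases the loss."""
    x = x0.copy()
    for _ in range(iters):
        g = grad_x(w, b, x, y)
        base = loss(w, b, x, y)
        step = 1.0
        while step > 1e-6:
            x_new = np.clip(x + step * g, lo, hi)  # box projection
            if loss(w, b, x_new, y) > base:
                x = x_new
                break
            step *= 0.5  # backtrack
        else:
            break  # no ascent direction remains inside the box
    return x

# Toy usage: a fixed linear classifier and one clean point in [0, 1]^4.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x_clean = rng.uniform(size=4)
x_poison = poison_point(w, b, x_clean, y=1)
```

The box projection (`np.clip`) is what keeps crafted points inside the feasible feature range, mirroring the box-constrained formulation mentioned in the abstract; the backtracking loop stands in for the line search used to pick a feasible gradient step.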

CITATION STYLE

APA

Dong, Y., Zhu, P., Liu, Q., Chen, Y., & Xun, P. (2018). Degrading detection performance of wireless IDSs through poisoning feature selection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10874 LNCS, pp. 90–102). Springer. https://doi.org/10.1007/978-3-319-94268-1_8
