Explainable AI-based Intrusion Detection in the Internet of Things

30 citations · 69 Mendeley readers

Abstract

The revolution of Artificial Intelligence (AI) has brought about a significant evolution in the landscape of cyberattacks. In particular, with the increasing power and capabilities of AI, cyberattackers can automate tasks, analyze vast amounts of data, and identify vulnerabilities with greater precision. On the other hand, despite its multiple benefits, the Internet of Things (IoT) raises severe security issues. Therefore, efficient intrusion detection mechanisms are critical. Although Machine Learning (ML)- and Deep Learning (DL)-based Intrusion Detection Systems (IDS) have already demonstrated their detection efficiency, they still suffer from false alarms and explainability issues that prevent security administrators from trusting them as fully as conventional signature- and specification-based IDS. In light of the aforementioned remarks, in this paper, we introduce an AI-powered IDS with explainability functions for the IoT. The proposed IDS relies on ML and DL methods, while the SHapley Additive exPlanations (SHAP) method is used to explain its decision-making. The evaluation results demonstrate the efficiency of the proposed IDS in terms of both detection performance and explainable AI (XAI).
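
As a rough illustration of the SHAP-based explanation step mentioned in the abstract, the sketch below trains a simple tree-based classifier on synthetic flow-level features and attributes its predictions with SHAP. This is not the authors' implementation: the dataset, feature set, and model choice are hypothetical placeholders, and the real IDS in the paper uses its own ML and DL models and IoT traffic data.

```python
# Minimal sketch, assuming a tabular intrusion-detection setting: train a
# classifier and explain its decisions with SHAP. Data, features, and model
# are hypothetical placeholders, not the paper's pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical IoT flow features (e.g., packet count, byte count, duration).
rng = np.random.default_rng(0)
X = rng.random((1000, 5))                   # 1000 flows, 5 numeric features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # synthetic "attack" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple ML detector standing in for the paper's ML/DL models.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# SHAP attributes each prediction to per-feature contributions, which is the
# kind of decision-level explanation the abstract refers to.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)

# The layout of shap_values differs across shap versions (per-class list vs.
# a single 3-D array), so only the shape and first prediction are reported.
print("Predicted label for first flow:", clf.predict(X_test[:1])[0])
print("SHAP values shape:", np.shape(shap_values))
```

In practice, the per-feature SHAP contributions can be presented to security administrators (e.g., via summary or force plots) so that each alert is accompanied by the traffic features that drove the decision.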

Citation (APA)

Siganos, M., Radoglou-Grammatikis, P., Kotsiuba, I., Markakis, E., Moscholios, I., Goudos, S., & Sarigiannidis, P. (2023). Explainable AI-based Intrusion Detection in the Internet of Things. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3600160.3605162
