An anomaly-based In-Vehicle Intrusion Detection System (IV-IDS) is one of the protection mechanisms for detecting cyber attacks on automotive vehicles. Using artificial intelligence (AI) for anomaly detection to thwart cyber attacks is promising, but such systems tend to generate false alarms and make decisions that are hard to interpret. This leads to uncertainty and distrust towards such IDS designs unless they can explain their behavior, e.g., by using eXplainable AI (XAI). In this paper, we consider the XAI-powered design of such an IV-IDS using CAN bus data from a public dataset named 'Survival'. Novel features are engineered, and a Deep Neural Network (DNN) is trained over the dataset. A visualization-based explanation, 'VisExp', is created to explain the behavior of the AI-based IV-IDS; experts then evaluate it in a survey against a rule-based explanation. Our results show that experts' trust in the AI-based IV-IDS increases significantly more when they are provided with VisExp than with the rule-based explanation. These findings confirm the effect of explainability in automated systems, and by extension the need for it; VisExp, as a source of increased explainability, shows promise in helping involved parties gain trust in such systems.
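To make the pipeline sketched above concrete, the following is a minimal, hypothetical illustration, not the paper's implementation: the synthetic data, the feature names, the small MLP standing in for the paper's DNN, and the use of SHAP summary plots as a stand-in for VisExp are all assumptions, since the abstract does not specify these details.

```python
# Hypothetical sketch of an XAI-powered anomaly-based IV-IDS pipeline:
# engineered CAN-frame features -> small DNN classifier -> visual explanation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import shap

rng = np.random.default_rng(0)

# Assumed engineered features per CAN-frame window (illustrative only,
# not the paper's actual features from the 'Survival' dataset).
feature_names = ["inter_arrival_ms", "payload_entropy",
                 "unique_id_count", "mean_payload"]

# Synthetic stand-in data: 1 = attack window, 0 = benign.
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small fully connected network as a placeholder for the paper's DNN.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Model-agnostic SHAP values over a background sample; the summary plot is
# one plausible visualization-based explanation of the IDS's decisions.
explainer = shap.KernelExplainer(lambda x: clf.predict_proba(x)[:, 1],
                                 X_train[:100])
shap_values = explainer.shap_values(X_test[:50])
shap.summary_plot(shap_values, X_test[:50], feature_names=feature_names)
```

Such a plot ranks features by their contribution to the attack/benign decision per sample, which is the kind of behavior-level insight a visualization-based explanation like VisExp aims to give experts.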