Methodologies for constructing intrusion detection systems, and for improving existing ones, are being actively studied in order to detect harmful data within high-volume network traffic. The most common approach is to use AI systems to adapt to unanticipated threats and to improve system performance. However, most studies aim to improve performance, and performance-oriented systems tend to be built from black-box models whose internal workings are complex. In the field of security control, analysts strive to interpret and respond based on the given data, the system's prediction results, and their own knowledge. Performance-oriented systems therefore suffer from a lack of interpretability, since they expose little information about their prediction results and internal processes. The recent social climate also demands responsible systems rather than purely performance-focused ones. This research aims to ensure understanding and interpretation by providing interpretability for AI systems in a multi-class classification environment that can detect various attacks. In particular, the better the performance, the more complex and less transparent the model becomes; the area an analyst can understand shrinks, and processing efficiency drops accordingly. The approach proposed in this research is an intrusion detection methodology that uses a FOS based on SHAP values to evaluate whether a prediction result is suspicious, and that selects the optimal rule from a transparent model to improve the explanation.
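As one illustration of the general idea (not the paper's actual FOS definition, which is given in the full text), a prediction could be flagged as suspicious when its per-feature SHAP attributions deviate strongly from the attribution profile typically seen for the predicted class. The sketch below uses a hypothetical mean absolute z-score threshold over precomputed SHAP values; the function names and the threshold are illustrative assumptions:

```python
import numpy as np

def suspicion_score(shap_row, class_mean, class_std, eps=1e-9):
    """Mean absolute z-score of one prediction's SHAP attributions
    relative to the attribution profile of its predicted class.
    (Hypothetical scoring rule, for illustration only.)"""
    z = (shap_row - class_mean) / (class_std + eps)
    return float(np.mean(np.abs(z)))

def is_suspicious(shap_row, class_mean, class_std, threshold=2.0):
    # Flag the prediction when its attributions are atypical
    # for the class the model assigned.
    return suspicion_score(shap_row, class_mean, class_std) > threshold

# Toy example: a per-class attribution profile and two predictions.
class_mean = np.array([0.5, -0.2, 0.1])   # mean SHAP values for the class
class_std = np.array([0.1, 0.1, 0.05])    # their standard deviations
typical = np.array([0.52, -0.18, 0.12])   # close to the profile
atypical = np.array([1.5, 0.9, -0.6])     # far from the profile

print(is_suspicious(typical, class_mean, class_std))   # False
print(is_suspicious(atypical, class_mean, class_std))  # True
```

A flagged prediction would then be handed to a transparent rule model for a human-readable explanation, which is the role the rule-selection step plays in the abstract above.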
Lee, Y., Lee, E., & Lee, T. (2022). Human-Centered Efficient Explanation on Intrusion Detection Prediction. Electronics (Switzerland), 11(13). https://doi.org/10.3390/electronics11132082