Recent work has shown that deep neural networks are vulnerable to backdoor attacks. In contrast to the success of backdoor-attack methods, existing backdoor-defense methods lack theoretical foundations and interpretable solutions. Most defense methods are designed empirically around the characteristics of known attacks and therefore fail to defend against new ones. In this paper, we propose IBD, an interpretable backdoor-detection method based on multivariate interactions. Using information-theoretic techniques, IBD reveals how a backdoor works from the perspective of multivariate interactions among features. Building on this theoretical analysis, IBD enables defenders to detect backdoored models and poisoned examples without requiring additional information about the specific attack method. Experiments on widely used datasets and models show that IBD achieves an average increase of (Formula presented.) in detection accuracy and an order-of-magnitude reduction in time cost compared with existing backdoor-detection methods.
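For intuition only (this is an illustrative measure, not necessarily the exact formulation used in IBD), multivariate interaction among features can be quantified information-theoretically. For three feature variables X, Y, and Z, the interaction information under one common sign convention is

$I(X; Y; Z) = I(X; Y) - I(X; Y \mid Z),$

which is nonzero exactly when the dependence between X and Y changes once Z is observed. A backdoor trigger that forces several features to act jointly toward the target label could be expected to leave an unusually strong interaction signature of this kind, which is the type of signal an interaction-based detector can look for.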
Xu, Y., Liu, X., Ding, K., & Xin, B. (2022). IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions. Sensors, 22(22), 8697. https://doi.org/10.3390/s22228697