Noise-response analysis of deep neural networks quantifies robustness and fingerprints structural malware

Abstract

The ubiquity of deep neural networks (DNNs), cloud-based training, and transfer learning is giving rise to a new cybersecurity frontier in which insecure DNNs have ‘structural malware’ (i.e., compromised weights and activation pathways). In particular, DNNs can be designed to have backdoors that allow an adversary to easily and reliably fool an image classifier by adding a pattern of pixels called a trigger. It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data). Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, ‘fingerprints’ its nonlinearity, and allows us to detect backdoors (if present). Our approach involves studying how a DNN responds to noise-infused images with varying noise intensity, which we summarize with titration curves. We find that DNNs with backdoors are more sensitive to input noise and respond in a characteristic way that reveals the backdoor and where it leads (its ‘target’). Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders of magnitude faster than existing approaches (seconds versus hours).
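The core idea of the abstract can be illustrated with a minimal sketch: sweep over noise intensities, perturb the inputs with Gaussian noise at each intensity, and record how often the classifier's predictions change. This is not the authors' implementation; the `toy_predict` stand-in classifier and the prediction-flip-rate proxy for noise response are illustrative assumptions, not details from the paper.

```python
import numpy as np

def titration_curve(predict, images, noise_levels, n_trials=10, seed=0):
    """For each noise intensity sigma, add Gaussian noise to the inputs and
    measure the average fraction of predictions that flip relative to the
    clean inputs -- a simple proxy for a noise-response titration curve."""
    rng = np.random.default_rng(seed)
    base = predict(images)  # predictions on clean inputs
    curve = []
    for sigma in noise_levels:
        flip_rate = 0.0
        for _ in range(n_trials):
            noisy = images + rng.normal(0.0, sigma, size=images.shape)
            flip_rate += np.mean(predict(noisy) != base)
        curve.append(flip_rate / n_trials)
    return np.array(curve)

# Toy stand-in "classifier": thresholds the mean pixel intensity.
def toy_predict(x):
    return (x.reshape(len(x), -1).mean(axis=1) > 0.5).astype(int)

images = np.random.default_rng(1).uniform(0.0, 1.0, size=(100, 8, 8))
curve = titration_curve(toy_predict, images,
                        noise_levels=[0.0, 0.1, 0.5, 1.0])
```

In the paper's setting, `predict` would be a trained image classifier, and the shape of the curve (how quickly predictions destabilize as noise intensity grows) is what distinguishes backdoored from clean models.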

Cite

APA

Erichson, N. B., Taylor, D., Wu, Q., & Mahoney, M. W. (2021). Noise-response analysis of deep neural networks quantifies robustness and fingerprints structural malware. In SIAM International Conference on Data Mining, SDM 2021 (pp. 100–108). Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9781611976700.12
