Deep neural networks (DNNs) have been applied across various industries. In particular, DNNs on embedded devices have attracted considerable interest because they allow real-time, distributed processing on site. However, adversarial examples (AEs), which add small perturbations to the input data of DNNs to cause misclassification, are a serious threat to DNNs. In this paper, a novel black-box attack is proposed that crafts AEs based only on processing time, i.e., a side-channel leak from DNNs on embedded devices. Unlike several existing black-box attacks that utilize output probabilities, the proposed attack exploits the relationship between the number of activated nodes and the processing time, without using training data, model architecture, parameters, substitute models, or output probabilities. In the proposed attack, the perturbations for AEs are determined from the differential processing time with respect to the input data of the DNN. The experimental results show that the AEs crafted by the proposed attack effectively increase the number of activated nodes and cause misclassification into one of the incorrect labels against DNNs on a microcontroller unit. Moreover, these results indicate that the attack can evade gradient-masking and confidence-reduction countermeasures, which conceal the output probabilities in order to prevent several existing black-box attacks from crafting AEs. Finally, countermeasures against the attack are implemented and evaluated, clarifying that an activation-function implementation with data-dependent timing is the cause of the leak exploited by the proposed attack.
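As context for the root cause identified in the abstract, consider how a ReLU activation is often implemented on a microcontroller. The following C sketch is an illustration, not the authors' code: it contrasts a branchy ReLU, whose execution path (and hence cycle count on a simple in-order core) depends on whether the node is activated, with a constant-time variant of the kind a countermeasure might use.

#include <stdint.h>

/* Branchy ReLU of the kind the paper identifies as the leak source:
 * the taken/not-taken branch can cost different cycle counts on a
 * simple in-order MCU, so total inference time grows with the number
 * of activated (positive) nodes. Fixed-point inputs assumed. */
int32_t relu_branchy(int32_t x) {
    if (x > 0) {
        return x;   /* "activated" path */
    }
    return 0;       /* "inactive" path; may take a different cycle count */
}

/* A constant-time variant (one possible countermeasure): select the
 * result with bit arithmetic instead of a branch, so the cycle count
 * is independent of the data. Assumes arithmetic right shift of
 * negative values, which is implementation-defined in C but common. */
int32_t relu_ct(int32_t x) {
    int32_t mask = ~(x >> 31);  /* all-ones when x >= 0, zero when x < 0 */
    return x & mask;
}

Because each activated node takes the activated path in the branchy version, total inference time correlates with the number of activated nodes, which is exactly the signal the attack measures. The perturbation search itself can then be pictured as a greedy loop driven only by timing. The sketch below is a hypothetical reconstruction under stated assumptions: measure_inference_cycles stands in for whatever mechanism the attacker uses to time the device and is not an API from the paper.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical interface to the victim device; an assumption for
 * illustration, not the paper's API. Returns the measured processing
 * time (e.g., in cycles) of one inference on the given input. */
extern uint32_t measure_inference_cycles(const float *input, size_t n);

/* One greedy round of a timing-guided search, sketched from the
 * abstract: for each input element, keep the signed step that yields
 * the longer processing time, i.e., presumably activates more nodes.
 * Clipping the input to its valid range is omitted for brevity. */
void timing_guided_step(float *x, size_t n, float delta) {
    for (size_t i = 0; i < n; i++) {
        float orig = x[i];

        x[i] = orig + delta;
        uint32_t t_plus = measure_inference_cycles(x, n);

        x[i] = orig - delta;
        uint32_t t_minus = measure_inference_cycles(x, n);

        /* keep the direction with the longer runtime */
        x[i] = (t_plus >= t_minus) ? orig + delta : orig - delta;
    }
}

The intuition is that steps which lengthen the runtime push more nodes into the activated region, nudging the input toward a decision boundary; iterating such rounds can yield AEs without any access to output probabilities, which is why confidence-hiding defenses do not stop the attack.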
CITATION STYLE
Nakai, T., Suzuki, D., & Fujino, T. (2021). Timing black-box attacks: Crafting adversarial examples through timing leaks against DNNs on embedded devices. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2021(3), 149–175. https://doi.org/10.46586/tches.v2021.i3.149-175