Abstract
Deep neural network (DNN) models are widely deployed for inference in many application scenarios. DNN accelerators, however, are designed for higher performance and lower energy consumption rather than security, leaving them exposed to attack. The insecure design of existing DNN accelerators can be exploited to recover the structure of a DNN model from its plaintext instructions, after which the runtime environment can be manipulated to extract the model's weights. Furthermore, the structure of the DNN model running on the accelerator can be inferred from side-channel information and the interrupt status register. To protect general DNN accelerators from such model inversion attacks, this paper proposes NPUFort, a secure and general architecture that guarantees the confidentiality of DNN model parameters and mitigates side-channel information leakage. Experimental results demonstrate the feasibility and effectiveness of the secure accelerator architecture, which incurs negligible performance overhead.
Wang, X., Hou, R., Zhu, Y., Zhang, J., & Meng, D. (2019). NPUFort: A secure architecture of DNN accelerator against model inversion attack. In ACM International Conference on Computing Frontiers 2019, CF 2019 - Proceedings (pp. 190–196). Association for Computing Machinery, Inc. https://doi.org/10.1145/3310273.3323070