Abstract
Neural network models have shown state-of-the-art performance in several applications. However, it has been observed that they are susceptible to adversarial attacks: small perturbations to the input that fool a network model into mislabelling the input data. These attacks can also transfer from one network model to another, which raises concerns over their applicability, particularly when privacy and security risks are involved. In this work, we conduct a study to analyze the effect of network architecture and weight initialization on the robustness of individual network models as well as on the transferability of adversarial attacks. Experimental results demonstrate that while weight initialization has no effect on the robustness of a network model, it does have an effect on attack transferability to a network model. Results also show that the complexity of a network model, as indicated by the total number of parameters and the number of multiply-accumulate (MAC) operations, is not indicative of a network's robustness to attack or of transferability, but accuracy can be: within the same architecture, higher accuracy usually indicates a more robust network, but across architectures there is no strong link between accuracy and robustness.
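As an illustration of the kind of perturbation the abstract describes, the sketch below applies a fast-gradient-sign-style attack (FGSM, a standard attack; the abstract does not specify which attacks the study used) to a toy logistic-regression classifier. All names and values here are hypothetical, chosen only to show how a small signed-gradient step can flip a model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Shift x by eps in the direction of the sign of the loss gradient.

    Loss is the logistic loss -log(sigmoid(y * w.x)); its gradient
    with respect to x is -y * (1 - sigmoid(y * w.x)) * w.
    """
    grad = -y * (1.0 - sigmoid(y * w.dot(x))) * w
    return x + eps * np.sign(grad)

# Toy model and a correctly classified input (w.x > 0 means label +1).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.4, 1.0])
y = 1.0

x_adv = fgsm_perturb(x, y, w, eps=0.6)
print("clean score:", w.dot(x))    # positive: classified correctly
print("adv score:  ", w.dot(x_adv))  # negative: prediction flipped
```

A per-feature shift of at most 0.6 is enough to flip the sign of the decision score here; in image models the same idea operates with far smaller per-pixel budgets.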
Citation
Ben Daya, I., Shaifee, M. J., Karg, M., Scharfenderger, C., & Wong, A. (2018). On Robustness of Deep Neural Networks: A Comprehensive Study on the Effect of Architecture and Weight Initialization to Susceptibility and Transferability of Adversarial Attacks. Journal of Computational Vision and Imaging Systems, 4(1), 3. https://doi.org/10.15353/jcvis.v4i1.329