Evaluating Robustness of AI Models against Adversarial Attacks

Abstract

Recently developed adversarial attacks on neural networks have become more aggressive and dangerous, and Artificial Intelligence (AI) models are no longer sufficiently robust against them. A set of effective and reliable methods for detecting malicious attacks is therefore important for ensuring the security of AI models. Such standardized methods can also serve as a reference for researchers developing robust models and new kinds of attacks. This study proposes a method to assess the robustness of AI models. Six commonly used image classification CNN models were evaluated under 13 types of adversarial attacks. The robustness of the models is calculated in an unbiased manner and can be used as a reference for further improvement. In contrast to prior related work, our algorithm is attack-agnostic and applicable to neural network models in general.
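The abstract does not spell out how the robustness score is computed, but the general recipe it describes (subject a trained classifier to a suite of attacks and aggregate its behaviour across them) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy model, the FGSM attack, and the use of mean accuracy over the attack suite as the score are all assumptions made here for demonstration.

```python
# Minimal sketch (assumed, not the paper's method): score a classifier by its
# mean accuracy over a suite of adversarial attacks. The model, data, attack
# suite, and score definition below are illustrative stand-ins.
import torch
import torch.nn as nn


def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one example attack in the suite."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


def accuracy(model, x, y):
    """Fraction of correctly classified samples."""
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


def robustness_score(model, x, y, attacks):
    """One simple attack-agnostic metric: average accuracy over all attacks."""
    return sum(accuracy(model, atk(model, x, y), y) for atk in attacks) / len(attacks)


if __name__ == "__main__":
    # Toy stand-ins for an image classifier and a test batch.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))

    # A small attack suite (the paper evaluates 13 attack types; here we only
    # vary the FGSM perturbation budget for illustration).
    attacks = [lambda m, a, b, e=e: fgsm(m, a, b, e) for e in (0.01, 0.03, 0.1)]

    print("clean accuracy:", accuracy(model, x, y))
    print("robustness score:", robustness_score(model, x, y, attacks))
```

Because the score only consumes attacked inputs and model predictions, any attack that maps clean samples to perturbed samples can be dropped into the suite, which is what makes such an evaluation attack-agnostic.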

Citation (APA)

Chang, C. L., Hung, J. L., Tien, C. W., Tien, C. W., & Kuo, S. Y. (2020). Evaluating Robustness of AI Models against Adversarial Attacks. In SPAI 2020 - Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligent, Co-located with AsiaCCS 2020 (pp. 47–54). Association for Computing Machinery, Inc. https://doi.org/10.1145/3385003.3410920
