Adversarial Examples in Physical World


Abstract

Although deep neural networks (DNNs) have achieved impressive performance and broad real-world impact, their vulnerability has recently drawn considerable research interest in artificial intelligence (AI) safety and robustness. A series of works reveals that current DNNs can be reliably misled by elaborately designed adversarial examples. Unfortunately, this weakness also affects real-world AI applications and places them at potential risk. We are particularly interested in physical attacks because they are implementable in the real world. The study of physical attacks can effectively promote the secure deployment of AI techniques, which is of great significance to the safe development of AI.
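To make "elaborately designed adversarial examples" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard digital attack, applied to a toy linear classifier. This is illustrative only and is not the specific method of the paper; the model, weights, and epsilon value are all assumptions chosen for the example.

```python
# Illustrative FGSM sketch on a toy logistic-regression classifier.
# Not the paper's method; all values below are assumed for demonstration.
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Perturb input x in the direction that increases the logistic loss
    for true label y. Model: p = sigmoid(w.x + b), y in {0, 1}.
    The gradient of the loss w.r.t. x is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# A toy classifier and an input it classifies correctly (logit > 0 -> class 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # logit = w @ x + b = 1.5 -> class 1
y = 1.0

x_adv = fgsm_attack(x, w, b, y, eps=1.0)
# The small signed perturbation pushes the point across the decision
# boundary: logit at x_adv = w @ x_adv + b = -1.5 -> class 0.
```

Physical attacks apply the same idea under real-world constraints (printing, viewpoint, lighting), which makes them harder to craft but far more consequential when they succeed.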

Citation (APA)

Wang, J. (2021). Adversarial Examples in Physical World. In IJCAI International Joint Conference on Artificial Intelligence (pp. 4925–4926). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/694
