Robust audio adversarial example for a physical attack


Abstract

We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are fed directly to the recognition model and therefore cannot mount such a physical attack, because reverberation and noise from the playback environment corrupt the perturbation. In contrast, our method obtains robust adversarial examples by simulating the transformations caused by playback and recording in the physical world and incorporating these transformations into the generation process. An evaluation and a listening experiment demonstrated that our adversarial examples can attack the model without being noticed by humans. This result suggests that audio adversarial examples generated by the proposed method may become a real threat.
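To illustrate the idea of incorporating simulated playback and recording into the generation process, the sketch below optimizes a perturbation under randomly sampled transformations (convolution with a room impulse response plus additive noise), in the spirit of expectation over transformation. This is a minimal illustration, not the authors' released implementation: the recognizer loss, the impulse-response bank, and all hyperparameters are assumptions for the example.

```python
# Minimal sketch of optimizing an audio perturbation under simulated
# playback/recording transformations. `recognizer_loss` (a differentiable loss
# between the recognizer's output and a target transcription) and
# `impulse_responses` (a list of 1-D room impulse response tensors) are
# hypothetical placeholders.
import torch
import torch.nn.functional as F


def simulate_playback(audio, impulse_responses, noise_std=0.01):
    """Apply a randomly chosen room impulse response and white noise."""
    ir = impulse_responses[torch.randint(len(impulse_responses), (1,)).item()]
    # Convolve the waveform with the impulse response (shapes [1, 1, T]).
    reverbed = F.conv1d(audio.view(1, 1, -1), ir.view(1, 1, -1),
                        padding=ir.numel() - 1)[..., :audio.numel()].view(-1)
    return reverbed + noise_std * torch.randn_like(reverbed)


def generate_robust_example(original, target_text, recognizer_loss,
                            impulse_responses, steps=1000, lr=1e-3, eps=0.05):
    """Optimize a perturbation that survives the simulated transformations."""
    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = original + delta
        # Average the recognition loss over several random transformations so
        # the perturbation remains effective after playback and recording.
        loss = torch.stack([
            recognizer_loss(simulate_playback(adv, impulse_responses), target_text)
            for _ in range(4)
        ]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small (less audible)
    return (original + delta).detach()
```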

Citation (APA)
Yakura, H., & Sakuma, J. (2019). Robust audio adversarial example for a physical attack. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 5334–5341). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/741
