Black-Box Adversarial Attacks against Audio Forensics Models

Abstract

Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also carries the risk of abuse by criminals. A line of research on audio forensics models has therefore emerged to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of the adversarial attacks, we use an ensemble-model method. Given the serious threat that adversarial examples pose to audio forensics models, we also design a defense against our proposed attack. Experimental results on four forensics models trained on the LA partition of the ASVspoof 2019 dataset show that our attack achieves a 99% success rate against score-only black-box models, competitive with the best white-box attacks, and a 60% success rate against decision-only black-box models. Finally, our defense method reduces the attack success rate to 16% while preserving 98% detection accuracy for the forensics models.
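
To make the score-only setting concrete: the abstract does not specify the authors' algorithm, but attacks of this class are commonly built on gradient estimation from score queries alone. The Python sketch below illustrates one standard such technique, natural-evolution-strategies (NES) gradient estimation; the model_score oracle, all parameter values, and the toy waveform are hypothetical stand-ins, not taken from the paper.

import numpy as np

def model_score(audio: np.ndarray) -> float:
    """Hypothetical black-box oracle: higher means 'classified as spoofed'."""
    # Toy stand-in scoring function, for demonstration only.
    return float(np.tanh(audio.mean()))

def nes_attack(audio, steps=100, sigma=1e-3, lr=1e-3, eps=0.01, n_samples=20):
    """Lower the spoof score using only score queries (no model gradients)."""
    x = audio.copy()
    for _ in range(steps):
        # Estimate the gradient of the score via antithetic Gaussian sampling.
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            u = np.random.randn(*x.shape)
            grad += (model_score(x + sigma * u) - model_score(x - sigma * u)) * u
        grad /= (2 * sigma * n_samples)
        x = x - lr * np.sign(grad)                 # step down the spoof score
        x = np.clip(x, audio - eps, audio + eps)   # keep the perturbation small
        x = np.clip(x, -1.0, 1.0)                  # stay in valid waveform range
    return x

fake_audio = np.random.uniform(-0.5, 0.5, size=16000)
adv = nes_attack(fake_audio)
print(model_score(fake_audio), "->", model_score(adv))

The sign-of-gradient update and the eps clipping reflect the usual constraint in audio attacks that the perturbation remain small in the waveform domain; an ensemble variant, as the paper proposes for transferability, would average estimated gradients (or scores) across several surrogate models.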

Citation (APA)

Jiang, Y., & Ye, D. (2022). Black-Box Adversarial Attacks against Audio Forensics Models. Security and Communication Networks, 2022. https://doi.org/10.1155/2022/6410478
