On the Defense of Spoofing Countermeasures Against Adversarial Attacks

Abstract

Advances in speech synthesis have exposed the vulnerability of spoofing countermeasure (CM) systems. Adversarial attacks exacerbate this problem, mainly because most CM models rely on deep neural networks. While adversarial attacks on anti-spoofing systems have received considerable attention, studies focused on developing effective defense techniques remain relatively scarce. In this study, we propose a defense strategy against such attacks that augments training data with frequency band-pass filtering and denoising. Our approach aims to limit the impact of adversarial perturbations, thereby reducing susceptibility to adversarial samples. Furthermore, our findings reveal that the use of Max-Feature-Map (MFM) activations together with frequency band-pass filtering provides additional benefits in suppressing different noise types. To empirically validate this hypothesis, we test different CM models using adversarial samples derived from the ASVspoof challenge and other well-known datasets. The evaluation results show that such defense mechanisms can potentially enhance the performance of spoofing countermeasure systems.
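The band-pass filtering augmentation mentioned above can be sketched as follows. This is a minimal illustration using a Butterworth filter from SciPy; the cutoff frequencies, filter order, and sampling rate are illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_augment(waveform, sr=16000, low_hz=300.0, high_hz=3400.0, order=4):
    """Band-pass filter a 1-D waveform as a training-time augmentation.

    Note: cutoffs and order here are hypothetical placeholder values,
    chosen only to demonstrate the idea of suppressing out-of-band
    perturbation energy.
    """
    # Design a Butterworth band-pass filter in second-order sections
    # (numerically stable) and apply it forward-backward (zero phase).
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, waveform)

# Example: a 1 kHz tone (in band) mixed with a 7 kHz tone (out of band).
# Filtering should preserve the former and attenuate the latter,
# which is the intuition behind limiting high-frequency perturbations.
sr = 16000
t = np.arange(sr) / sr
in_band = np.sin(2 * np.pi * 1000 * t)
out_of_band = np.sin(2 * np.pi * 7000 * t)
filtered = bandpass_augment(in_band + out_of_band, sr=sr)
```

In a training pipeline, such a filter would typically be applied to a random subset of utterances (possibly with randomized cutoffs) before feature extraction, so the CM model learns to rely less on frequency regions where adversarial perturbations concentrate.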

Citation (APA)

Nguyen-Vu, L., Doan, T. P., Bui, M., Hong, K., & Jung, S. (2023). On the Defense of Spoofing Countermeasures Against Adversarial Attacks. IEEE Access, 11, 94563–94574. https://doi.org/10.1109/ACCESS.2023.3310809
