Deep hashing (DH)-based image retrieval has been widely applied in face-matching systems because of its accuracy and efficiency. This convenience, however, comes with an increased risk of privacy leakage. DH models inherit the vulnerability of deep networks to adversarial attacks, which can in turn be used to prevent the retrieval of private images. Existing adversarial attacks against DH typically target a single image or a specific class of images, and no universal adversarial perturbation exists for an entire hash dataset. In this paper, we propose the first universal transferable adversarial perturbation against DH-based facial image retrieval: a single perturbation that can protect all images. Specifically, we explore the relationship between the clusters learned by different DH models and define the optimization objective of the universal perturbation as moving away from the overall hash center. To mitigate the difficulty of this single-objective optimization, we randomly sample sub-cluster centers and further propose sub-task-based meta-learning to aid the overall optimization. Experiments on popular facial datasets and DH models demonstrate impressive cross-image, cross-identity, cross-model, and cross-scheme universal anti-retrieval performance. Compared with state-of-the-art methods, our approach is competitive in white-box settings and improves transferability by 10%–70% in all black-box settings.
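To make the "leaving the overall hash center" objective concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes ±1 binary hash codes produced by some DH model, computes the overall hash center as the element-wise sign of the mean code, and scores a perturbed code by its (normalized) agreement with that center. Minimizing this agreement maximizes the Hamming distance from the center, which is the anti-retrieval direction the abstract describes. All function names here are hypothetical.

```python
import numpy as np

def hash_center(codes):
    # codes: (N, K) array of ±1 hash codes from a DH model (assumption:
    # binarized outputs). The overall hash center is the element-wise
    # sign of the mean code over the dataset.
    c = np.sign(codes.mean(axis=0))
    c[c == 0] = 1  # break ties toward +1 so the center stays in {±1}^K
    return c

def center_agreement(code, center):
    # Normalized inner product in [-1, 1]; equals 1 - 2*Hamming/K for
    # ±1 codes. "Leaving the center" means driving this toward -1.
    return float(code @ center) / len(center)
```

A universal perturbation would then be optimized (e.g. by gradient descent through the DH model's continuous outputs) so that `center_agreement` is minimized on average over all images; the sub-cluster centers mentioned in the abstract would play the role of additional centers in meta-learning sub-tasks.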
Citation:
Tang, L., Ye, D., Lv, Y., Chen, C., & Zhang, Y. (2024). Once and for All: Universal Transferable Adversarial Perturbation against Deep Hashing-Based Facial Image Retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 5136–5144). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i6.28319