Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition


Abstract

Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples. However, existing adversarial examples against face recognition systems either lack transferability to black-box models or cannot be implemented in practice. In this paper, we propose a unified adversarial face generation method, Adv-Makeup, which realizes imperceptible and transferable attacks under the black-box setting. Adv-Makeup develops a task-driven makeup generation method with a blending module to synthesize imperceptible eye shadow over the orbital region of faces. To achieve transferability, Adv-Makeup further implements a fine-grained meta-learning-based adversarial attack strategy to learn more vulnerable or sensitive features from various models. Visualization results demonstrate that Adv-Makeup generates much more imperceptible attacks than existing techniques in both digital and physical scenarios. Meanwhile, extensive quantitative experiments show that Adv-Makeup significantly improves the attack success rate under the black-box setting, even when attacking commercial systems.
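To make the meta-learning idea in the abstract concrete, below is a minimal PyTorch-style sketch of one attack iteration under assumed ingredients: a set of surrogate face recognition models, a binary orbital-region mask, a simple alpha-blending step, and a cosine-similarity impersonation loss. All function names and update details are illustrative assumptions for exposition, not the authors' released Adv-Makeup implementation (which additionally trains a makeup generator and blending network that this sketch omits).

```python
import random
import torch
import torch.nn.functional as F


def blend_patch(face, patch, mask):
    """Alpha-blend a generated eye-shadow patch onto the orbital region given a binary mask."""
    return face * (1 - mask) + patch * mask


def impersonation_loss(model, adv_face, target_face):
    """Push the adversarial face's embedding toward the target identity (cosine distance)."""
    emb_adv = F.normalize(model(adv_face), dim=-1)
    emb_tgt = F.normalize(model(target_face), dim=-1)
    return 1.0 - (emb_adv * emb_tgt).sum(dim=-1).mean()


def meta_attack_step(patch, face, target_face, mask, surrogate_models, lr=0.01):
    """One meta-iteration (first-order approximation): meta-train on all but one
    surrogate FR model, meta-test the one-step-updated patch on the held-out
    model, and combine both gradients to update the patch."""
    patch = patch.detach().requires_grad_(True)
    held_out = random.choice(surrogate_models)
    train_models = [m for m in surrogate_models if m is not held_out]

    # Meta-train: average impersonation loss over the sampled surrogate models.
    adv = blend_patch(face, patch, mask)
    train_loss = sum(impersonation_loss(m, adv, target_face) for m in train_models) / len(train_models)
    g_train = torch.autograd.grad(train_loss, patch)[0]

    # Meta-test: evaluate a one-step-updated patch on the held-out model.
    patch_updated = (patch - lr * g_train).detach().requires_grad_(True)
    test_loss = impersonation_loss(held_out, blend_patch(face, patch_updated, mask), target_face)
    g_test = torch.autograd.grad(test_loss, patch_updated)[0]

    # Combine meta-train and meta-test gradients; keep pixel values valid.
    return (patch.detach() - lr * (g_train + g_test)).clamp(0.0, 1.0)
```

Iterating this step while cycling the held-out model is meant to mimic the shift from the white-box ensemble to an unseen black-box model, which is the intuition behind the transferability claim in the abstract.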

Citation (APA)

Yin, B., Wang, W., Yao, T., Guo, J., Kong, Z., Ding, S., … Liu, C. (2021). Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1252–1258). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/173
