Adaptive Transformers for Robust Few-shot Cross-domain Face Anti-spoofing

12 citations · 42 readers (Mendeley)

Abstract

While recent face anti-spoofing methods perform well under intra-domain setups, an effective approach must account for the much larger appearance variations of images acquired in complex scenes with different sensors. In this paper, we present adaptive vision transformers (ViT) for robust cross-domain face anti-spoofing. Specifically, we adopt ViT as a backbone to exploit its strength in modeling long-range dependencies among pixels. We further introduce an ensemble adapters module and feature-wise transformation layers into the ViT to adapt it to different domains using only a few samples. Experiments on several benchmark datasets show that the proposed models achieve robust and competitive performance against state-of-the-art methods for cross-domain face anti-spoofing with few samples.
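The two components named in the abstract, adapters and feature-wise transformation (FWT) layers, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the weight names (`W_down`, `W_up`), dimensions, and the Gaussian sampling scales (`theta_gamma`, `theta_beta`) are illustrative assumptions. An adapter is a small bottleneck MLP added to a frozen backbone block with a residual connection; an FWT layer perturbs intermediate features with randomly sampled per-channel affine parameters to simulate domain shift during training.

```python
import numpy as np

def adapter(x, W_down, W_up):
    # Bottleneck adapter: down-project, nonlinearity, up-project,
    # plus a residual connection so the backbone's features are preserved.
    h = np.maximum(x @ W_down, 0.0)  # ReLU stand-in for the nonlinearity
    return x + h @ W_up

def feature_wise_transform(x, theta_gamma, theta_beta, rng):
    # Sample per-channel affine parameters around identity (gamma ~ 1, beta ~ 0);
    # theta_* control the sampling scale and would be learned in practice.
    gamma = 1.0 + rng.normal(0.0, abs(theta_gamma), size=x.shape[-1])
    beta = rng.normal(0.0, abs(theta_beta), size=x.shape[-1])
    return gamma * x + beta

rng = np.random.default_rng(0)
d, r = 8, 2                                   # feature and bottleneck dims (illustrative)
x = rng.normal(size=(4, d))                   # 4 tokens of dimension d
W_down = rng.normal(scale=0.1, size=(d, r))
W_up = rng.normal(scale=0.1, size=(r, d))

y = feature_wise_transform(adapter(x, W_down, W_up), 0.3, 0.5, rng)
print(y.shape)  # (4, 8): same shape as the input tokens
```

An "ensemble" of adapters, as the name in the abstract suggests, would run several such adapter branches in parallel and aggregate their outputs, which can stabilize few-shot adaptation compared with a single adapter.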

Citation (APA)

Huang, H. P., Sun, D., Liu, Y., Chu, W. S., Xiao, T., Yuan, J., … Yang, M. H. (2022). Adaptive Transformers for Robust Few-shot Cross-domain Face Anti-spoofing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13673 LNCS, pp. 37–54). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19778-9_3
