Unimodal biometric systems are commonplace nowadays, yet there remains room for performance improvement. Multimodal biometrics, i.e., the combination of more than one biometric modality, is a promising remedy; however, its deployment faces various limitations, e.g., modality availability, template management, and deployment cost. In this paper, we propose a new notion dubbed Conditional Biometrics representation for flexible biometrics deployment, whereby one biometric modality is utilized to condition another for representation learning. We demonstrate the proposed conditioned representation learning on face and periocular biometrics via a deep network dubbed the Conditional Biometrics Network. The proposed network serves as a representation extractor for unimodal, multimodal, and cross-modal matching during deployment. Our experimental results on five in-the-wild periocular-face datasets demonstrate that the network outperforms its respective baselines for identification and verification tasks in all deployment scenarios.
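To make the conditioning idea concrete, the sketch below illustrates one common way a conditioning embedding can modulate another modality's representation: FiLM-style feature-wise scaling and shifting. All dimensions, weights, and the modulation scheme here are illustrative assumptions for exposition, not the paper's actual Conditional Biometrics Network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # toy "encoder": a linear projection followed by L2 normalization
    h = x @ W
    return h / np.linalg.norm(h, axis=-1, keepdims=True)

# hypothetical dimensions: raw inputs projected to 8-D embeddings
D_FACE, D_PERI, D_EMB = 16, 12, 8
W_face = rng.standard_normal((D_FACE, D_EMB))
W_peri = rng.standard_normal((D_PERI, D_EMB))
# FiLM-style scale/shift parameters derived from the conditioning embedding
W_gamma = rng.standard_normal((D_EMB, D_EMB))
W_beta = rng.standard_normal((D_EMB, D_EMB))

def conditional_embedding(face_x, peri_x):
    """Condition the periocular embedding on the face embedding via
    feature-wise modulation (an illustrative stand-in, not the paper's
    actual conditioning mechanism)."""
    f = encode(face_x, W_face)          # conditioning modality (face)
    p = encode(peri_x, W_peri)          # conditioned modality (periocular)
    gamma, beta = f @ W_gamma, f @ W_beta
    c = gamma * p + beta                # modulated periocular feature
    return c / np.linalg.norm(c, axis=-1, keepdims=True)

# the resulting unit vector can be matched against unimodal or
# cross-modal templates with a simple cosine similarity
face_a, peri_a = rng.standard_normal(D_FACE), rng.standard_normal(D_PERI)
emb = conditional_embedding(face_a, peri_a)
print(emb.shape)  # an 8-D unit-norm template
```

Because the output is a single normalized vector, matching reduces to a dot product regardless of which modalities were available at enrollment, which is the flexibility the paper's deployment scenarios target.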
CITATION STYLE
Ng, T. S., Low, C. Y., Chai, J. C. L., & Teoh, A. B. J. (2023). On the Representation Learning of Conditional Biometrics for Flexible Deployment. IEEE Access, 11, 82338–82350. https://doi.org/10.1109/ACCESS.2023.3301150