A SIFT-based feature level fusion of iris and ear biometrics


Abstract

To overcome the drawbacks encountered in unimodal biometric systems for person authentication, multimodal biometric methods are needed. This paper presents an efficient feature-level fusion of iris and ear images using SIFT descriptors, which are extracted from the iris and ear images separately. These features are then combined into a single feature vector called the fused template. The generated template is enrolled in the database, and matching is performed by comparing the SIFT features of the input iris and ear images against the enrolled template of the claiming user using the Euclidean distance. The proposed method has been applied to a synthetic multimodal biometric database built from the CASIA and USTB 2 databases, which provide the iris and ear image sets respectively. Performance is evaluated using the false rejection rate (FRR), the false acceptance rate (FAR), and accuracy. The obtained results show that fusion at the feature level outperforms the iris and ear authentication systems taken separately.
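To make the described pipeline concrete, the following is a minimal Python/OpenCV sketch of the general approach the abstract outlines: SIFT descriptors are extracted from the iris and ear images, concatenated into a fused template, and a probe is matched against the enrolled template by Euclidean distance. The fusion-by-concatenation step, the Lowe-style ratio test, the file names, and the decision threshold are all assumptions for illustration, not the authors' exact implementation.

```python
import cv2
import numpy as np

def extract_sift(image_path):
    """Detect SIFT keypoints and compute 128-D descriptors for one grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(img, None)
    return descriptors  # shape: (num_keypoints, 128)

def fuse_templates(iris_desc, ear_desc):
    """Feature-level fusion: stack iris and ear descriptors into one template (assumed strategy)."""
    return np.vstack([iris_desc, ear_desc])

def match_score(probe_template, enrolled_template, ratio=0.75):
    """Count Euclidean-distance (L2) matches between probe and enrolled templates,
    filtered with a Lowe-style ratio test (an assumed acceptance criterion)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(probe_template, enrolled_template, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

# Enrollment: build and store the fused template for a user (placeholder file names).
enrolled = fuse_templates(extract_sift("user1_iris.png"),
                          extract_sift("user1_ear.png"))

# Verification: fuse the probe images and compare against the enrolled template.
probe = fuse_templates(extract_sift("probe_iris.png"),
                       extract_sift("probe_ear.png"))
score = match_score(probe, enrolled)
print("Accept" if score > 20 else "Reject")  # threshold chosen arbitrarily for the sketch
```

In practice the acceptance threshold would be tuned on the database to trade off FAR against FRR, which is how the FAR, FRR, and accuracy figures reported in the paper are obtained.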

Citation (APA)

Ghoualmi, L., Chikhi, S., & Draa, A. (2015). A SIFT-based feature level fusion of iris and ear biometrics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8869, pp. 102–112). Springer Verlag. https://doi.org/10.1007/978-3-319-14899-1_10
