A Robust Multi-unit Feature-Level Fusion Framework for Iris Biometrics


Abstract

In this paper, an iris biometric framework based on feature-level multi-unit fusion, using both the left and right iris images, is proposed. Fusion takes advantage of the complementary information in the two iris units of an individual. For both units, segmentation is performed using an improved iris segmentation methodology (ISM) that precisely localizes the iris region of interest (ROI) from the input eye image and handles noise factors such as occlusions, specular reflections, off-axis gaze, and blurring in a reduced search space with low computational time. Feature extraction is performed using first-order and second-order statistical measures that accurately characterize the unique textural patterns of the localized iris ROI without converting from the polar to the Cartesian space. The statistical features obtained are then fused using the sum and mean methods of fusion, and back-propagation neural networks are used for classification. Experimental results on the CASIA V3-Interval, CASIA V3-Lamp, MMU V1, and MMU V2 iris datasets show a significant performance improvement of the mean rule-based multi-unit feature-level fusion system over both single-modal systems and the sum rule-based multi-unit feature-level fusion system.
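The sum and mean fusion rules described above can be sketched as element-wise operations on the two units' statistical feature vectors. The sketch below is a minimal illustration only: the function name, the toy 4-dimensional vectors, and the assumption that both units yield equal-length feature vectors are hypothetical, not taken from the paper.

```python
import numpy as np

def fuse_features(left_feats, right_feats, rule="mean"):
    """Feature-level multi-unit fusion of left- and right-iris feature
    vectors (illustrative sketch; the paper's actual statistical
    features and any normalization steps are not reproduced here)."""
    left = np.asarray(left_feats, dtype=float)
    right = np.asarray(right_feats, dtype=float)
    if left.shape != right.shape:
        raise ValueError("feature vectors must have the same length")
    if rule == "sum":
        return left + right           # sum rule: element-wise sum
    if rule == "mean":
        return (left + right) / 2.0   # mean rule: element-wise average
    raise ValueError(f"unknown fusion rule: {rule!r}")

# Toy example: two hypothetical 4-dimensional statistical feature vectors
left = [0.2, 0.8, 0.5, 0.1]
right = [0.4, 0.6, 0.3, 0.3]
fused_sum = fuse_features(left, right, rule="sum")
fused_mean = fuse_features(left, right, rule="mean")
```

The fused vector would then be passed to the back-propagation neural network classifier in place of either unit's individual feature vector.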

Citation (APA)

Alice Nithya, A., Ferni Ukrit, M., & Femilda Josephin, J. S. (2020). A Robust Multi-unit Feature-Level Fusion Framework for Iris Biometrics. In Advances in Intelligent Systems and Computing (Vol. 1056, pp. 239–246). Springer. https://doi.org/10.1007/978-981-15-0199-9_21
