Employing fusion of learned and handcrafted features for unconstrained ear recognition

Abstract

The authors present an unconstrained ear recognition framework that outperforms state-of-the-art systems on several publicly available image databases. To this end, they developed convolutional neural network (CNN)-based solutions for ear normalisation and description, used well-known handcrafted descriptors, and fused learned and handcrafted features to improve recognition. They designed a two-stage landmark detector that worked successfully on scenarios it was not trained for, and used its output to perform a geometric image normalisation that boosted the performance of all evaluated descriptors. The proposed CNN descriptor outperformed other CNN-based works in the literature, especially in more challenging scenarios. The fusion of learned and handcrafted matchers proved complementary and achieved the best performance in all experiments. The obtained results surpassed all other reported results for the Unconstrained Ear Recognition Challenge (UERC), whose database is the most challenging available to date.
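The abstract does not spell out how the learned and handcrafted matchers are combined, but a common approach for this kind of complementary fusion is score-level fusion: normalise each matcher's similarity scores, then combine them with a weighted sum. The sketch below illustrates that idea; the min-max normalisation, the `fuse_scores` helper, and the 0.5 weight are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def min_max_normalize(scores):
    """Rescale a set of match scores to [0, 1] (min-max normalisation)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:  # degenerate case: all scores identical
        return np.zeros_like(scores)
    return (scores - lo) / (hi - lo)

def fuse_scores(cnn_scores, handcrafted_scores, weight=0.5):
    """Weighted-sum fusion of two normalised score sets.

    `weight` balances the learned (CNN) matcher against the handcrafted
    one; the equal weighting used here is an assumption for illustration.
    """
    return (weight * min_max_normalize(cnn_scores)
            + (1 - weight) * min_max_normalize(handcrafted_scores))

# Toy example: similarity of one probe ear against four gallery ears.
cnn = [0.9, 0.2, 0.4, 0.1]
hand = [0.7, 0.3, 0.6, 0.2]
fused = fuse_scores(cnn, hand)
best_match = int(np.argmax(fused))  # index of the top-ranked gallery ear
```

Because each matcher's scores are normalised independently before fusion, neither descriptor's raw score scale can dominate the combined ranking.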

Citation (APA)
Hansley, E. E., Segundo, M. P., & Sarkar, S. (2018). Employing fusion of learned and handcrafted features for unconstrained ear recognition. IET Biometrics, 7(3), 215–223. https://doi.org/10.1049/iet-bmt.2017.0210
