Deep Learning Based Gender Identification Using Ear Images


Abstract

Classifying an individual as male or female is a significant problem with several practical applications. In recent years, automatic gender identification has garnered considerable interest because of its potential uses in e-commerce and the collection of demographic data. Deep learning models have attained remarkable success across a variety of problem domains. In this study, we aim to establish an end-to-end model that capitalizes on the complementary strengths of convolutional neural network (CNN) and vision transformer (ViT) models. To this end, we propose a novel approach that combines the MobileNetV2 model, recognized for having fewer parameters than other CNN models, with a ViT model. In rigorous evaluations against other recent studies on the accuracy metric, our model attains state-of-the-art performance on the EarVN1.0 dataset with a score of 96.66%. In addition, t-SNE visualizations demonstrate that our model learns a superior representation, disentangling the two classes more effectively.
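The abstract does not specify how the MobileNetV2 and ViT branches are combined; a common choice for such hybrids is late fusion, i.e. concatenating the two embedding vectors before a small classification head. The pure-Python sketch below illustrates only that fusion-and-classify step for a binary male/female output; all names, dimensions, and weight values are illustrative assumptions, not the authors' implementation.

```python
import math

def fuse_and_classify(cnn_feat, vit_feat, weights, bias):
    """Concatenate a CNN embedding with a ViT embedding, then apply a
    single linear unit with a sigmoid, yielding a probability in (0, 1).

    cnn_feat, vit_feat : lists of floats (the two branch embeddings)
    weights            : list of floats, len == len(cnn_feat) + len(vit_feat)
    bias               : float
    Hypothetical head for illustration; the paper's actual head is unspecified.
    """
    fused = list(cnn_feat) + list(vit_feat)    # late fusion by concatenation
    logit = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-logit))      # sigmoid

# Toy example: a 4-dim CNN embedding fused with a 4-dim ViT embedding.
cnn_feat = [0.2, -0.1, 0.5, 0.0]
vit_feat = [0.3, 0.4, -0.2, 0.1]
weights  = [0.5] * 8
prob_male = fuse_and_classify(cnn_feat, vit_feat, weights, bias=0.0)
```

In a real pipeline the two embeddings would come from pretrained MobileNetV2 and ViT backbones, and the head would be trained end-to-end with the rest of the network.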

Citation (APA)

Kılıç, Ş., & Doğan, Y. (2023). Deep Learning Based Gender Identification Using Ear Images. Traitement Du Signal, 40(4), 1629–1639. https://doi.org/10.18280/ts.400431
