Multi-modal region based convolution neural network (MM-RCNN) for ethnicity identification and classification

Abstract

Human facial images help to acquire demographic information about a person, such as ethnicity and gender. At the same time, ethnicity and gender play a significant role in face-related applications. In this study, image-based ethnicity identification is treated as a classification problem and solved with deep learning techniques. A new multi-modal region-based convolutional neural network (MM-RCNN) is proposed for the detection and classification of ethnicity, supporting the determination of attributes such as age, gender, and emotion. The presented model involves two stages, namely feature extraction and classification. In the first stage, an efficient feature extraction model called ImageAnnot is developed to extract useful features from an image. In the second stage, the MM-RCNN is employed to identify and then classify ethnicity. To validate the performance of the MM-RCNN model, various evaluation parameters are reported, and the simulation outcomes confirm that the presented model outperforms existing models.
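The abstract does not give implementation details for ImageAnnot or MM-RCNN. As a rough illustration of the two-stage idea only (a feature-extraction stage feeding a classification stage), the PyTorch sketch below wires a toy CNN feature extractor into a small classification head. The layer sizes, the class count of 4, and the names FeatureExtractor and EthnicityClassifier are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Stand-in for the paper's first stage (ImageAnnot); details are assumed."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        # Pool spatial dimensions and flatten to per-image feature vectors.
        return self.conv(x).flatten(1)  # shape: (N, 64)


class EthnicityClassifier(nn.Module):
    """Stand-in for the second (classification) stage: features -> class scores."""
    def __init__(self, num_classes=4):  # class count is an assumption
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, feats):
        return self.head(feats)


# Toy usage: a batch of 8 RGB face crops, 128x128 pixels.
extractor, classifier = FeatureExtractor(), EthnicityClassifier(num_classes=4)
images = torch.randn(8, 3, 128, 128)
logits = classifier(extractor(images))
print(logits.shape)  # torch.Size([8, 4])
```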

Citation (APA)
Christy, C., Arivalagan, S., & Sudhakar, P. (2019). Multi-modal region based convolution neural network (MM-RCNN) for ethnicity identification and classification. International Journal of Innovative Technology and Exploring Engineering, 8(11), 1168–1176. https://doi.org/10.35940/ijitee.J9102.0981119
