Improving User Identification Accuracy in Facial and Voice Based Mood Analytics using Fused Feature Extraction


Abstract

User identification involves a number of complex procedures, including image processing, voice processing, and biometric data processing, along with other user-specific parameters. It can be applied to many fields, including but not limited to smartphone authentication, bank transactions, and location-based identity access. In this work, we present a novel approach for uniquely identifying users based on their facial and voice data. Our approach uses an intelligent and adaptive combination of facial geometry and mel-frequency analysis of user voice data (via Mel-Frequency Cepstral Coefficients, or MFCCs) to generate a mood-based personality profile that is almost unique to each user. The combined features are fed to a machine-learning classifier, which achieves more than 90% accuracy with a false positive rate below 7%. We also compared the proposed approach with several other standard implementations and observed that ours produces better results than most of them under real-time conditions.
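The abstract describes a fused-feature pipeline: geometric features from the face and MFCC features from the voice are concatenated and passed to a classifier. The sketch below illustrates one plausible realization of that idea; the paper does not name its libraries or classifier, so the use of librosa for MFCC extraction, pairwise landmark distances as the facial-geometry features, and a random forest are all assumptions made here for illustration.

```python
# Hypothetical sketch of the fused feature pipeline described in the abstract.
# Assumptions (not specified in the paper): librosa for MFCC extraction,
# precomputed 2-D facial landmarks, and a random-forest classifier.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(audio_path, n_mfcc=13):
    """Mean MFCC vector over the utterance (voice branch)."""
    y, sr = librosa.load(audio_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                      # shape: (n_mfcc,)

def geometry_features(landmarks):
    """Pairwise distances between facial landmarks (face branch).

    `landmarks` is an (N, 2) array of landmark coordinates,
    e.g. from any face-landmark detector.
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu]                              # upper-triangle distances

def fused_features(audio_path, landmarks):
    """Concatenate voice and face features into one vector."""
    return np.concatenate([mfcc_features(audio_path),
                           geometry_features(landmarks)])

# Training sketch: X stacks fused vectors per enrollment sample,
# y holds the corresponding user IDs.
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# user_id = clf.predict([fused_features("sample.wav", sample_landmarks)])
```

Simple concatenation is only one fusion strategy; the "intelligent and adaptive combination" the authors describe may weight or select features differently, which this sketch does not attempt to reproduce.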

Citation (APA)

Improving User Identification Accuracy in Facial and Voice Based Mood Analytics using Fused Feature Extraction. (2019). International Journal of Innovative Technology and Exploring Engineering, 9(2S3), 490–494. https://doi.org/10.35940/ijitee.b1118.1292s319
