Steering Data Arbitration on Facial-Speech Features for Fusion-Based Emotion Recognition Framework


Abstract

Emotion recognition is a computationally complex task with a broad spectrum of real-world applications. Most strategies proposed in recent years rely on monolithic learning over a single data modality. However, work in emotive analytics has shown that incorporating additional data modalities improves the recognition rate for this multifaceted task. Building on the evidence for fusion-based learning, the feature sets of multiple modalities can be harnessed in an adaptive fusion-based emotion recognition framework. We propose such a framework that combines speech and facial-image features of the reference subject for improved emotion recognition. The role of data arbitration in steering learning and recognition is highlighted, with an implicit capacity to handle heterogeneity across learning models and data modalities; the framework achieves a recognition rate of 90.32%, comparable to human-level performance.
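The abstract describes decision-level fusion steered by data arbitration but does not spell out the mechanics. The following minimal sketch is an illustrative assumption, not the authors' implementation: it shows one way an arbitration weight could blend the posteriors of a facial classifier and a speech classifier, using an entropy-based confidence heuristic. The function names, emotion labels, and weighting rule are all hypothetical.

```python
# Hypothetical sketch of fusion with a data-arbitration step (not the paper's code).
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]

def arbitrate(face_probs: np.ndarray, speech_probs: np.ndarray) -> float:
    """Return the weight given to the facial stream (speech gets 1 - weight).

    Assumed heuristic: the stream whose posterior is more peaked (lower entropy)
    is trusted more. Real arbitration could also weigh signal quality, occlusion,
    or noise estimates.
    """
    def entropy(p: np.ndarray) -> float:
        return float(-np.sum(p * np.log(p + 1e-12)))

    h_face, h_speech = entropy(face_probs), entropy(speech_probs)
    # Lower entropy -> higher confidence -> larger share of the fused decision.
    return h_speech / (h_face + h_speech + 1e-12)

def fuse(face_probs: np.ndarray, speech_probs: np.ndarray) -> str:
    """Blend per-modality posteriors and return the fused emotion label."""
    w = arbitrate(face_probs, speech_probs)
    fused = w * face_probs + (1.0 - w) * speech_probs
    return EMOTIONS[int(np.argmax(fused))]

if __name__ == "__main__":
    # Dummy posteriors standing in for outputs of per-modality classifiers.
    face = np.array([0.05, 0.70, 0.10, 0.10, 0.05])    # confident "happy"
    speech = np.array([0.25, 0.30, 0.20, 0.15, 0.10])  # uncertain
    print(fuse(face, speech))  # -> "happy"
```

Under this sketch, the arbitration weight shifts the fused decision toward whichever modality is more confident, which is one plausible reading of how arbitration "steers" learning and recognition across heterogeneous modalities.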

Citation (APA)

Singh, V., & Singh, K. (2023). Steering Data Arbitration on Facial-Speech Features for Fusion-Based Emotion Recognition Framework. In Lecture Notes in Networks and Systems (Vol. 693 LNNS, pp. 103–119). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-3243-6_9
