Feature fusion algorithm for multimodal emotion recognition from speech and facial expression signal


Abstract

To overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes speech signals and facial expression signals as its research subjects. First, the speech and facial expression features are fused, sample sets are generated by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between pairs of classifiers is measured with a double-error difference selection strategy. Finally, the recognition result is obtained by majority voting. Experiments show that the method improves the accuracy of emotion recognition by exploiting the complementary advantages of decision-level and feature-level fusion, bringing the overall fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
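The pipeline the abstract describes — feature-level fusion of the two modalities, bootstrap sampling with replacement, one classifier per sample set, and decision-level fusion by majority vote — can be sketched as follows. This is a minimal illustration, not the authors' implementation: a simple nearest-centroid classifier stands in for the paper's BP neural network, and the double-error difference selection step is omitted.

```python
import numpy as np

def fuse_features(speech, face):
    """Feature-level fusion: concatenate per-sample speech and facial feature vectors."""
    return np.hstack([speech, face])

class NearestCentroid:
    """Toy stand-in for the paper's BPNN base classifier."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each sample to the class with the nearest centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def bagged_majority_vote(X, y, X_test, n_estimators=5, seed=0):
    """Train one classifier per bootstrap sample, then fuse decisions by majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_estimators):
        # Sample set obtained by putting-back (bootstrap) sampling
        idx = rng.integers(0, len(X), size=len(X))
        clf = NearestCentroid().fit(X[idx], y[idx])
        votes.append(clf.predict(X_test))
    votes = np.array(votes)
    # Decision-level fusion: majority voting rule across classifiers
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

A usage example would fuse a speech feature matrix and a facial feature matrix row-wise, then pass the fused matrix through `bagged_majority_vote`; swapping the stand-in classifier for a trained BPNN recovers the structure described in the abstract.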

Citation (APA)

Zhiyan, H., & Jian, W. (2016). Feature fusion algorithm for multimodal emotion recognition from speech and facial expression signal. In MATEC Web of Conferences (Vol. 61). EDP Sciences. https://doi.org/10.1051/matecconf/20166103012
