A cross-culture study on multimodal emotion recognition using deep learning


Abstract

In this paper, we investigate the similarities and differences in multimodal signals between Chinese and French subjects on a three-emotion recognition task using deep learning. We use videos containing positive, neutral, and negative emotions as stimulus material. Both Chinese and French subjects wear electrode caps and eye-tracking glasses during the experiments to collect electroencephalography (EEG) and eye movement data. To address the scarcity of training data for deep neural networks, a conditional Wasserstein generative adversarial network is adopted to generate EEG and eye movement data. The EEG and eye movement features are fused using Deep Canonical Correlation Analysis (DCCA) to analyze the relationship between the two modalities. Our experimental results show that French subjects achieve higher classification accuracy on the beta frequency band, while Chinese subjects perform better on the gamma frequency band. In addition, the EEG signals and eye movement data of French participants have complementary characteristics for discriminating positive and negative emotions.
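The fusion step described above projects EEG and eye movement features into a shared space where their correlation is maximal, then combines the projections. A minimal sketch of the linear analogue of DCCA (plain canonical correlation analysis, implemented with NumPy) illustrates the idea; the feature dimensions and sample counts below are hypothetical placeholders, not the paper's actual setup:

```python
import numpy as np

def cca_fuse(X, Y, k, reg=1e-6):
    """Linear CCA fusion: project X and Y onto their top-k maximally
    correlated directions, then concatenate the projections."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance matrices
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    # Whitened cross-covariance; its singular vectors give the
    # canonical directions, its singular values the correlations
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(T)
    A = inv_sqrt(Sxx) @ U[:, :k]   # projection matrix for X
    B = inv_sqrt(Syy) @ Vt[:k].T   # projection matrix for Y
    return np.concatenate([X @ A, Y @ B], axis=1)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((200, 31))  # hypothetical EEG feature dim
eye = rng.standard_normal((200, 33))  # hypothetical eye movement feature dim
fused = cca_fuse(eeg, eye, k=10)
print(fused.shape)  # (200, 20)
```

DCCA replaces the linear projections `A` and `B` with neural networks trained to maximize the same correlation objective; the fused representation would then feed a downstream emotion classifier.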

APA

Gan, L., Liu, W., Luo, Y., Wu, X., & Lu, B. L. (2019). A cross-culture study on multimodal emotion recognition using deep learning. In Communications in Computer and Information Science (Vol. 1142 CCIS, pp. 670–680). Springer. https://doi.org/10.1007/978-3-030-36808-1_73
