A GAN-Based Data Augmentation Method for Multimodal Emotion Recognition


Abstract

The lack of training data is an obstacle to building satisfactory multimodal emotion recognition models. Generative adversarial networks (GANs) have recently shown great success in generating realistic data. In this paper, we propose a GAN-based data augmentation method for enhancing the performance of multimodal emotion recognition models. We adopt a conditional Boundary Equilibrium GAN (cBEGAN) to generate artificial differential entropy features of electroencephalography signals, eye movement data, and their direct concatenations. The main advantages of cBEGAN are that it overcomes the training instability of conventional GANs and converges very quickly. We evaluate our proposed method on two multimodal emotion datasets. The experimental results demonstrate that our method achieves 4.6% and 8.9% improvements in mean accuracy on classifying three and five emotions, respectively.
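The stability and fast convergence attributed to cBEGAN come from BEGAN's equilibrium mechanism: the discriminator is an autoencoder, and a coefficient k_t is adapted each step to balance the reconstruction errors on real and generated samples. The sketch below illustrates that per-step update in NumPy under stated assumptions (helper name, toy inputs, and default hyperparameters gamma and lambda_k are illustrative, not taken from the paper; the class-conditioning of cBEGAN is abstracted into the generated samples themselves):

```python
import numpy as np

def began_update(recon_real, recon_fake, k_t, gamma=0.5, lambda_k=0.001):
    """One BEGAN equilibrium step (hypothetical helper, not the authors' code).

    recon_real / recon_fake: per-sample autoencoder reconstruction errors of
    the discriminator on real and generated (conditioned) feature vectors.
    Returns the discriminator loss, generator loss, updated k_t, and the
    global convergence measure M.
    """
    l_real = float(np.mean(recon_real))   # L(x)
    l_fake = float(np.mean(recon_fake))   # L(G(z, c))
    loss_d = l_real - k_t * l_fake        # discriminator objective
    loss_g = l_fake                       # generator objective
    # k_t drifts so that E[L(G)] approaches gamma * E[L(x)], clamped to [0, 1]
    k_next = float(np.clip(k_t + lambda_k * (gamma * l_real - l_fake), 0.0, 1.0))
    # M = L(x) + |gamma * L(x) - L(G)| decreases as training converges
    m_global = l_real + abs(gamma * l_real - l_fake)
    return loss_d, loss_g, k_next, m_global

# Toy usage with made-up reconstruction errors:
loss_d, loss_g, k_next, m = began_update([0.4], [0.1], k_t=0.0)
# loss_d = 0.4, loss_g = 0.1, k_next = 0.0001, m = 0.5
```

Because k_t rises only when the generator's reconstruction error is below the gamma-scaled target, neither player can dominate, which is the property that lets BEGAN-style models avoid the mode collapse and oscillation of a vanilla GAN.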

APA

Luo, Y., Zhu, L. Z., & Lu, B. L. (2019). A GAN-Based Data Augmentation Method for Multimodal Emotion Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11554 LNCS, pp. 141–150). Springer Verlag. https://doi.org/10.1007/978-3-030-22796-8_16
