Improving Distinguishability of Photoplethysmography in Emotion Recognition Using Deep Convolutional Generative Adversarial Networks


Abstract

We propose an emotion recognition framework based on ResNet, bidirectional long short-term memory (BiLSTM) modules, and data augmentation using a ResNet deep convolutional generative adversarial network (DCGAN), with photoplethysmography (PPG) signals as input. The emotions identified in this study were classified into two classes (positive and negative) and four classes (neutral, angry, happy, and sad). The framework achieved high recognition rates of 90.34% and 86.32% in the two- and four-class emotion recognition tasks, respectively, outperforming other representative methods. Moreover, we show that the ResNet DCGAN module can synthesize samples that not only resemble those in the training set but also capture discriminative features of the different classes. The distinguishability of the classes was enhanced when these synthetic samples were added to the original samples, which in turn improved the test accuracy of the model trained on the mixed samples. This effect was evaluated using various quantitative and qualitative methods, including the inception score (IS), Fréchet inception distance (FID), GAN quality index (GQI), linear discriminant analysis (LDA), and Mahalanobis distance (MD).
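To make the pipeline concrete, the sketch below shows one plausible shape of the classifier the abstract describes: a 1-D ResNet feature extractor over raw PPG segments feeding a BiLSTM and a linear head. The abstract does not give layer counts, kernel widths, sampling rate, or hidden sizes, so all of those values are assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed hyperparameters, not the paper's exact design):
# a 1-D ResNet feature extractor -> BiLSTM -> linear classifier for PPG
# emotion recognition with four classes (neutral/angry/happy/sad).
import torch
import torch.nn as nn


class ResBlock1d(nn.Module):
    """Basic 1-D residual block (assumed design)."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection


class ResNetBiLSTM(nn.Module):
    """ResNet feature extractor followed by a BiLSTM and classifier head."""

    def __init__(self, n_classes: int = 4, channels: int = 64, hidden: int = 128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, stride=2, padding=7),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )
        self.blocks = nn.Sequential(ResBlock1d(channels), ResBlock1d(channels))
        self.bilstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):  # x: (batch, 1, samples) raw PPG segments
        feats = self.blocks(self.stem(x))   # (batch, channels, time)
        seq = feats.transpose(1, 2)         # (batch, time, channels)
        out, _ = self.bilstm(seq)           # (batch, time, 2 * hidden)
        return self.head(out[:, -1, :])     # logits from the last time step


# Example: a batch of 8 five-second PPG segments at an assumed 128 Hz.
model = ResNetBiLSTM()
logits = model(torch.randn(8, 1, 640))
print(logits.shape)  # torch.Size([8, 4])
```

The abstract also names Mahalanobis distance as one of the distinguishability metrics. As a hedged illustration only (the paper's exact computation is not given here), one common way to score separation between two classes' feature distributions is the Mahalanobis distance between their means under a pooled covariance:

```python
# Hypothetical helper: Mahalanobis distance between two classes' feature
# clouds (rows = samples, columns = features). A larger value would
# indicate better class distinguishability after augmentation.
import numpy as np


def mahalanobis_between_classes(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    # Covariance of the pooled, per-class-centered samples.
    pooled = np.cov(np.vstack([feats_a - mu_a, feats_b - mu_b]).T)
    diff = mu_a - mu_b
    return float(np.sqrt(diff @ np.linalg.pinv(diff.size and pooled) @ diff))
```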

Citation (APA)

Yu, S. N., Wang, S. W., & Chang, Y. P. (2022). Improving Distinguishability of Photoplethysmography in Emotion Recognition Using Deep Convolutional Generative Adversarial Networks. IEEE Access, 10, 119630–119640. https://doi.org/10.1109/ACCESS.2022.3221774
