SigRep: Toward Robust Wearable Emotion Recognition with Contrastive Representation Learning

Citations: 46
Mendeley readers: 56

Abstract

Extracting emotions from physiological signals has become popular over the past decade. Recent advances in wearable smart devices have enabled capturing physiological signals continuously and unobtrusively. However, signal readings from different smart wearables are lossy due to user activities, making it difficult to develop robust models for emotion recognition. The limited availability of data labels is a further inherent challenge in developing machine learning techniques for emotion classification. This paper presents a novel self-supervised approach, inspired by contrastive learning, that addresses both challenges. In particular, we develop a method to learn representations of individual physiological signals, which can be used for downstream classification tasks. Our evaluation on four publicly available datasets shows that the proposed method surpasses state-of-the-art emotion classification techniques, and we show that it is more robust to losses in the input signal.
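
The core idea described above is contrastive pretraining on unlabeled signal windows. The sketch below shows a generic, minimal SimCLR-style training step in PyTorch; it is not SigRep's actual implementation. The SignalEncoder architecture, the jitter-and-scaling augmentations, and the hyperparameters (embedding size, temperature) are all illustrative assumptions; the paper's own design is described in the full text.

```python
# Minimal sketch of SimCLR-style contrastive pretraining on 1-D signal
# windows. NOT the paper's implementation: the encoder, augmentations,
# and hyperparameters here are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignalEncoder(nn.Module):
    """Tiny 1-D CNN mapping a signal window to a unit-norm embedding."""
    def __init__(self, in_channels: int = 1, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.net(x).squeeze(-1)                  # (B, 64)
        return F.normalize(self.proj(h), dim=1)      # (B, embed_dim)

def augment(x: torch.Tensor) -> torch.Tensor:
    """Stochastic augmentation: additive jitter plus random scaling
    (stand-ins for whatever transforms the paper actually uses)."""
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1)
    return scale * x + noise

def nt_xent_loss(z1, z2, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss: each window's two augmented views are positives;
    every other window in the batch is a negative."""
    b = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                   # (2B, D)
    sim = z @ z.t() / temperature                    # cosine sims (unit norm)
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# One pretraining step on a batch of unlabeled signal windows.
encoder = SignalEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
windows = torch.randn(32, 1, 256)                    # (batch, channels, time)
opt.zero_grad()
loss = nt_xent_loss(encoder(augment(windows)), encoder(augment(windows)))
loss.backward()
opt.step()
```

After pretraining, the encoder's embeddings would feed a small classifier for the downstream emotion recognition task, which is the general pattern the abstract describes.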




Citation (APA)

Dissanayake, V., Seneviratne, S., Rana, R., Wen, E., Kaluarachchi, T., & Nanayakkara, S. (2022). SigRep: Toward Robust Wearable Emotion Recognition with Contrastive Representation Learning. IEEE Access, 10, 18105–18120. https://doi.org/10.1109/ACCESS.2022.3149509

Readers' Seniority

PhD / Postgrad / Masters / Doc: 21 (78%)
Researcher: 5 (19%)
Lecturer / Postdoc: 1 (4%)

Readers' Discipline

Computer Science: 14 (52%)
Engineering: 10 (37%)
Medicine and Dentistry: 2 (7%)
Social Sciences: 1 (4%)
