Action Decouple Multi-Tasking for Micro-Expression Recognition

Abstract

Micro-expressions are brief, involuntary facial movements that reveal genuine emotions. Extracting and learning features from them is challenging, however, because of their short duration and low intensity. To address these challenges, we propose ADMME (Action Decouple Multi-Tasking for Micro-Expression Recognition). Our model adopts a pseudo-Siamese network architecture and leverages contrastive learning to obtain a better representation of micro-expression motion features. During training, we use focal loss to handle the class imbalance present in micro-expression datasets. We additionally introduce an AU (Action Unit) detection task, which provides a new inductive bias for micro-expression recognition and improves the model's generalization and robustness. In five-class classification experiments on the CASME II and SAMM datasets, we achieve accuracy rates of 86.34% and 81.28% and F1 scores of 0.8635 and 0.8168, respectively, validating the effectiveness of our method. A series of ablation experiments further confirms the contribution of each component.
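
As context for readers, the sketch below shows one plausible way to combine the abstract's three ingredients in PyTorch: a pseudo-Siamese (unshared-weight) two-branch encoder, focal loss for the imbalanced five-class micro-expression head, an auxiliary multi-label AU-detection head, and a simple contrastive term between the branch embeddings. The branch architecture, the onset/apex input pairing, the AU count, and the loss weights are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalLoss(nn.Module):
    """Multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t).

    gamma=2.0 is the common default from Lin et al. (2017); the paper's
    exact hyperparameters are not stated in the abstract.
    """

    def __init__(self, gamma: float = 2.0, alpha: float = 1.0):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # log p_t for the true class of each sample
        log_pt = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-self.alpha * (1.0 - pt) ** self.gamma * log_pt).mean()


class PseudoSiameseADMME(nn.Module):
    """Two branches with unshared weights (hence 'pseudo'-Siamese).

    We assume the two inputs are, e.g., onset and apex frames; the tiny
    convolutional branch below is a placeholder for whatever backbone
    the paper actually uses.
    """

    def __init__(self, feat_dim: int = 128, num_classes: int = 5, num_aus: int = 12):
        super().__init__()

        def make_branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        self.branch_a = make_branch()  # weights NOT shared between branches
        self.branch_b = make_branch()
        self.cls_head = nn.Linear(2 * feat_dim, num_classes)  # 5 ME classes
        self.au_head = nn.Linear(2 * feat_dim, num_aus)       # multi-label AUs

    def forward(self, x_a, x_b):
        z_a, z_b = self.branch_a(x_a), self.branch_b(x_b)
        fused = torch.cat([z_a, z_b], dim=-1)
        return self.cls_head(fused), self.au_head(fused), z_a, z_b


def total_loss(model, x_a, x_b, y_cls, y_au, w_au=0.5, w_con=0.1):
    """Joint objective: focal loss + AU detection + a contrastive term.

    y_cls: (B,) long tensor of class indices; y_au: (B, num_aus) float
    multi-hot tensor. The cosine term is a simple stand-in for the
    paper's contrastive objective; w_au and w_con are assumed weights.
    """
    cls_logits, au_logits, z_a, z_b = model(x_a, x_b)
    loss_cls = FocalLoss()(cls_logits, y_cls)                      # imbalance-aware
    loss_au = F.binary_cross_entropy_with_logits(au_logits, y_au)  # auxiliary task
    loss_con = 1.0 - F.cosine_similarity(z_a, z_b).mean()          # pull pairs together
    return loss_cls + w_au * loss_au + w_con * loss_con
```

In an ordinary training loop, something like `total_loss(model, onset_batch, apex_batch, class_labels, au_labels)` would then be backpropagated; the hypothetical `onset_batch`/`apex_batch` pairing is only one possible input scheme.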

Citation (APA)

Wang, Y., Shi, H., & Wang, R. (2023). Action Decouple Multi-Tasking for Micro-Expression Recognition. IEEE Access, 11, 82978–82988. https://doi.org/10.1109/ACCESS.2023.3301950
