Abstract
Computational analysis of facial micro-expressions is becoming a prevalent research area, yet automatic micro-expression spotting, the first problem in the pipeline, remains unresolved. Two main factors limit the performance of current studies: 1) the subtle, involuntary movements of micro-expressions are hard to capture, and 2) micro-expression datasets are relatively small and cannot fully support the training of deep neural networks. For the first problem, we propose modeling expression movements across consecutive frames in the wavelet space as temporal features. Combined with spatial features encoded by a convolutional neural network, temporal and spatial information complement each other in further analyses. For the second problem, we adopt transfer learning from other emotion-related tasks, since their facial priors are homologous to our task. To train our model, we convert the spotting task into a frame-level classification task; meanwhile, a weighted focal loss is used to handle severe class imbalance. With leave-one-subject-out cross-validation, our method reports F1-scores of 0.1763 and 0.1360 on CAS(ME)2 and SAMM-LV, respectively. Code is available at https://github.com/guanjz20/MM21_FME_solution.
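The weighted focal loss mentioned above can be sketched as follows. This is a minimal NumPy illustration of the standard focal-loss formulation with per-class weights, not the authors' exact implementation; the function name, parameter names, and the choice of gamma are assumptions for illustration.

```python
import numpy as np

def weighted_focal_loss(probs, labels, class_weights, gamma=2.0):
    """Weighted focal loss for frame-level classification (illustrative sketch).

    probs         : (N, C) predicted class probabilities per frame
    labels        : (N,)   integer ground-truth class per frame
    class_weights : (C,)   per-class weights to counter class imbalance
    gamma         : focusing parameter; larger values down-weight easy examples
    """
    p_t = probs[np.arange(len(labels)), labels]        # probability of the true class
    w_t = class_weights[labels]                        # weight of the true class
    loss = -w_t * (1.0 - p_t) ** gamma * np.log(p_t)   # focal modulation of cross-entropy
    return loss.mean()
```

The `(1 - p_t)^gamma` factor suppresses the loss from well-classified (typically non-expression) frames, while `class_weights` lets the rare expression frames contribute more, which is why this loss suits the severe class imbalance of frame-level spotting.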
Guan, J., & Shen, D. (2021). Transfer Spatio-Temporal Knowledge from Emotion-Related Tasks for Facial Expression Spotting. In FME 2021 - Proceedings of the 1st Workshop on Facial Micro-Expression: Advanced Techniques for Facial Expressions Generation and Spotting (pp. 19–24). Association for Computing Machinery, Inc. https://doi.org/10.1145/3476100.3484461