Zero-Shot Facial Expression Recognition with Multi-label Label Propagation

Abstract

Facial expression recognition classifies a face image into one of several discrete emotional categories. Many exclusive or non-exclusive emotion classes are needed to describe the varied and nuanced meanings conveyed by facial expressions, yet it is practically impossible to enumerate all emotional categories and collect adequate annotated samples for each. To this end, we propose a zero-shot learning framework with multi-label label propagation (Z-ML$^2$P). Z-ML$^2$P is built on existing multi-class datasets annotated with several basic emotions, and it infers the presence of new emotion labels via a learned semantic space. To evaluate the proposed method, we collect a multi-label FER dataset, FaceME. Experimental results on FaceME and two other FER datasets demonstrate that the Z-ML$^2$P framework outperforms state-of-the-art zero-shot learning methods in recognizing both seen and unseen emotions.
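The abstract only sketches the mechanism, so the following is a minimal illustrative sketch of multi-label label propagation over a semantic label graph, in the spirit of Z-ML$^2$P but not the authors' actual method. The label set, the random stand-in embeddings, and the classic Zhou et al. (2004) propagation update are all assumptions introduced here for illustration; the paper instead learns the semantic space from data.

```python
import numpy as np

# Hypothetical semantic embeddings for seen (basic) and unseen emotion
# labels. In the paper these come from a learned semantic space; here we
# use random vectors purely for illustration.
rng = np.random.default_rng(0)
labels = ["happy", "sad", "angry", "surprised",   # seen (annotated) emotions
          "delighted", "gloomy"]                  # unseen emotions
E = rng.normal(size=(len(labels), 50))            # stand-in label embeddings

# Affinity between labels from cosine similarity in the semantic space.
E /= np.linalg.norm(E, axis=1, keepdims=True)
W = np.clip(E @ E.T, 0.0, None)
np.fill_diagonal(W, 0.0)

# Symmetrically normalized graph: S = D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
S = W / (np.sqrt(np.outer(d, d)) + 1e-12)

def propagate(y_seen, alpha=0.5, iters=50):
    """Classic label propagation (Zhou et al., 2004): scores for unseen
    labels are inferred from seen-label scores via the semantic graph.
    y_seen holds initial multi-label scores, with zeros for unseen labels."""
    f = y_seen.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y_seen
    return f

# A face scored on the four seen emotions only; propagation fills in
# plausible scores for the semantically related unseen ones.
y = np.array([0.9, 0.0, 0.0, 0.6, 0.0, 0.0])
print(propagate(y).round(3))
```

Under this toy setup, an unseen label ends up with a high score when it sits close in the semantic space to seen labels that fired strongly, which is the intuition the abstract attributes to the learned semantic space.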

Citation (APA)

Lu, Z., Zeng, J., Shan, S., & Chen, X. (2019). Zero-Shot Facial Expression Recognition with Multi-label Label Propagation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11363 LNCS, pp. 19–34). Springer Verlag. https://doi.org/10.1007/978-3-030-20893-6_2
