Facial expression recognition based on dimension model of emotion with autonomously extracted sparse representations

Abstract

This paper presents a facial expression recognition system based on a dimension model of internal emotional states, with sparse representations of facial expressions extracted autonomously in three steps. In the first step, a Gabor wavelet representation extracts the edges of face components. In the second step, sparse features of facial expressions are extracted by applying the fuzzy C-means (FCM) clustering algorithm to neutral faces; in the third step, corresponding features are extracted from expression images using the Dynamic Linking Model (DLM). Finally, facial expressions are recognized on the dimension model of internal states using a multi-layer perceptron. The dimension model overcomes the limitations of recognition schemes restricted to a small set of basic emotions, and features are extracted automatically with a new approach combining the FCM algorithm and the Dynamic Linking Model.
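The sketch below illustrates the shape of such a pipeline, assuming synthetic data in place of the paper's face images and omitting the Dynamic Linking Model matching step (only hinted at in a comment). All function names, parameter values, and the random targets for the two emotion dimensions are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the three-step feature extraction plus MLP regression
# onto a two-dimensional emotion space (pleasure-displeasure, arousal-sleep).
import numpy as np
from scipy.signal import convolve2d
from sklearn.neural_network import MLPRegressor


def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real part of a Gabor wavelet at a given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)


def gabor_features(img, freqs=(0.1, 0.2, 0.3), n_orient=4):
    """Step 1: Gabor wavelet responses that emphasise edges of face components."""
    responses = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, k * np.pi / n_orient)
            responses.append(np.abs(convolve2d(img, kern, mode='same')))
    return np.stack(responses, axis=-1)


def fuzzy_c_means(points, n_clusters=20, m=2.0, n_iter=50, seed=0):
    """Step 2: plain FCM; cluster centres act as sparse fiducial points."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centres = um.T @ points / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centres


# Toy data standing in for neutral / expression face images.
rng = np.random.default_rng(0)
neutral = rng.random((64, 64))
expressions = [rng.random((64, 64)) for _ in range(10)]

# Steps 1 + 2: sparse points located on the neutral face from strong Gabor responses.
resp = gabor_features(neutral)
ys, xs = np.unravel_index(np.argsort(resp.sum(-1), axis=None)[-200:], neutral.shape)
sparse_points = fuzzy_c_means(np.column_stack([xs, ys]).astype(float))

# Step 3 would match these points onto each expression image with the Dynamic
# Linking Model; here we simply sample Gabor responses at the neutral locations.
def expression_vector(img):
    r = gabor_features(img)
    return np.concatenate([r[int(y), int(x)] for x, y in sparse_points])

X = np.array([expression_vector(img) for img in expressions])
# Dimension-model targets (pleasure-displeasure, arousal-sleep); random placeholders.
y = rng.uniform(-1, 1, size=(len(expressions), 2))

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X[:1]))  # predicted position on the two emotion dimensions
```

In this reading, recognition is a regression onto continuous emotion dimensions rather than classification into discrete basic emotions, which is what allows the dimension model to cover expressions that fall between categories.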

Citation (APA)

Shin, Y. S. (2004). Facial expression recognition based on dimension model of emotion with autonomously extracted sparse representations. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3072, 81–87. https://doi.org/10.1007/978-3-540-25948-0_12
