Recognition of facial expressions by cortical multi-scale line and edge coding

Abstract

Face-to-face communication between humans involves emotions, which are often conveyed unconsciously by facial expressions and body gestures. Intelligent human-machine interfaces, for example in cognitive robotics, need to recognize emotions. This paper addresses facial expressions and their neural correlates on the basis of a model of the visual cortex: the multi-scale line and edge coding. The recognition model links the cortical representation with Paul Ekman's Action Units, which are related to the different facial muscles. The model applies a top-down categorization using the trends and magnitudes of displacements of the mouth and eyebrows, based on expected displacements relative to a neutral expression. The happy vs. not-happy categorization yielded a correct recognition rate of 91%, whereas final recognition of the six expressions happy, anger, disgust, fear, sadness and surprise resulted in a rate of 78%. © 2010 Springer-Verlag.
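The categorization idea described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: landmark names, trend signatures, and the threshold below are all hypothetical, standing in for the paper's expected displacements of the mouth and eyebrows relative to a neutral expression.

```python
# Hypothetical trend signatures: sign of vertical displacement vs. a
# neutral face (+1 = upward, -1 = downward, 0 = roughly unchanged).
# These values are illustrative, not taken from the paper.
EXPECTED = {
    "happy":    {"mouth_corners": +1, "eyebrows": 0},
    "sadness":  {"mouth_corners": -1, "eyebrows": -1},
    "surprise": {"mouth_corners": 0,  "eyebrows": +1},
}

def classify(measured, threshold=0.1):
    """Pick the expression whose trend signature best matches the
    measured displacements (a dict mapping landmark -> displacement)."""
    def sign(x):
        # Displacements smaller than the threshold count as "unchanged".
        return 0 if abs(x) < threshold else (1 if x > 0 else -1)

    signs = {k: sign(v) for k, v in measured.items()}

    def score(expr):
        # Count how many landmark trends agree with the signature.
        return sum(signs.get(k) == s for k, s in EXPECTED[expr].items())

    return max(EXPECTED, key=score)
```

For example, raised mouth corners with roughly stationary eyebrows would match the "happy" signature; this corresponds to the first, coarse happy vs. not-happy stage before the finer six-way categorization.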

Citation (APA)
De Sousa, R. J. R., Rodrigues, J. M. F., & Du Buf, J. M. H. (2010). Recognition of facial expressions by cortical multi-scale line and edge coding. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6111 LNCS, pp. 415–424). https://doi.org/10.1007/978-3-642-13772-3_42
