Unsupervised learning of facial emotion decoding skills

Research on the mechanisms underlying human facial emotion recognition has long focused on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase in facial emotion recognition accuracy both within and across two training sessions held two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, the unsupervised perceptual learning of simple visual stimuli described in previous studies, and the practice effects often observed in cognitive tasks. © 2014 Huelle, Sack, Broer, Komlewa and Anders.
Huelle, J. O., Sack, B., Broer, K., Komlewa, I., & Anders, S. (2014). Unsupervised learning of facial emotion decoding skills. Frontiers in Human Neuroscience, 8(1 FEB). https://doi.org/10.3389/fnhum.2014.00077