How do we interpret the rapidly changing visual stimuli we encounter? How does our past visual experience shape our perception? Recent work has suggested that our visual system is able to interpret multiple faces presented temporally via integration or ensemble coding. Visual adaptation is widely used to probe such short-term plasticity. Here we use an adaptation paradigm to investigate whether integration or averaging of emotional faces occurs during a rapid serial visual presentation (RSVP). In four experiments, we tested whether an RSVP stream of distinct emotional faces could induce adaptation aftereffects and whether these aftereffects were comparable in magnitude to those induced by the corresponding statistically averaged face. Experiment 1 showed that RSVP faces could generate significant facial expression aftereffects (FEAs) for both happy and sad emotions. Experiment 2 revealed that the magnitudes of the FEAs from RSVP faces and their paired average faces were comparable and significantly correlated. Experiment 3 showed that the FEAs depended on the mean emotion of the face stream, regardless of variations in emotion or the temporal frequency of the stream. Experiment 4 further indicated that the emotion of the stream's average face, but not the emotion of individual faces matched for identity to the test faces, determined the FEAs. Together, our results suggest that the visual system interprets rapidly presented faces by ensemble coding, and thus imply the formation of a facial expression norm in face space.
Citation:
Ying, H., & Xu, H. (2017). Adaptation reveals that facial expression averaging occurs during rapid serial presentation. Journal of Vision, 17(1). https://doi.org/10.1167/17.1.15