Quantifying Facial Expression Intensity and Signal Use in Deaf Signers


Abstract

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.

Citation (APA)

Stoll, C., Rodger, H., Lao, J., Richoz, A. R., Pascalis, O., Dye, M., & Caldara, R. (2019). Quantifying Facial Expression Intensity and Signal Use in Deaf Signers. Journal of Deaf Studies and Deaf Education, 24(4), 346–355. https://doi.org/10.1093/deafed/enz023
