Bimodal expression of emotion by face and voice

Abstract

A goal of research in human-computer interaction is computer systems that can recognize and understand nonverbal communication. In a series of studies, we developed semi-automated methods of discriminating emotion and paralinguistic communication in face and voice. In Study 1, three computer-vision based modules reliably recognized FACS action units, which are the smallest visibly discriminable changes in facial expression. Automated Face Analysis demonstrated convergent validity with manual coding for 15 action units and action unit combinations central to the expression of emotion. In Study 2, prosodic measures discriminated pragmatic intent in infant-directed speech with accuracy ranging from 61-65% in test samples. In Study 3, facial EMG and prosodic measures combined discriminated between negative, neutral, and positive emotion with accuracy ranging from 47-79% in test samples. These results support the feasibility of human-computer interfaces that are sensitive to the full range of human nonverbal communication.
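
To illustrate the kind of bimodal classification described in Study 3, the sketch below fuses facial-EMG and prosodic feature vectors and trains a simple classifier to separate negative, neutral, and positive emotion. This is not the authors' implementation: the feature names, dimensions, random data, and the logistic-regression classifier are assumptions for illustration only.

# Hypothetical sketch of bimodal (face + voice) emotion classification.
# Feature names and dimensions are illustrative, not those of Cohn & Katz (1998).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 300

# Assumed facial-EMG features (e.g., zygomaticus and corrugator activity)
# and prosodic features (e.g., F0 mean, F0 range, energy) per utterance.
emg_features = rng.normal(size=(n_samples, 2))
prosody_features = rng.normal(size=(n_samples, 3))

# Feature-level fusion: concatenate the two modalities into one vector.
X = np.hstack([emg_features, prosody_features])
y = rng.integers(0, 3, size=n_samples)  # 0 = negative, 1 = neutral, 2 = positive

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize features, then fit a multinomial logistic-regression classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))

With real, correlated features (rather than the random placeholders above), test accuracy in this three-class setup is the quantity reported in the abstract's 47-79% range.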

Cite

APA

Cohn, J. F., & Katz, G. S. (1998). Bimodal expression of emotion by face and voice. In Proceedings of the 6th ACM International Conference on Multimedia: Face/Gesture Recognition and their Applications, MULTIMEDIA 1998 (pp. 41–44). Association for Computing Machinery, Inc. https://doi.org/10.1145/306668.306683
