Audio-visual affective expression recognition through multistream fused HMM

Abstract

Advances in computer processing power and emerging algorithms are enabling new ways of envisioning Human-Computer Interaction. Although audio-visual fusion is expected to benefit affect recognition from both psychological and engineering perspectives, most existing approaches to automatic human affect analysis are unimodal: the information processed by the computer system is limited to either face images or speech signals. This paper focuses on the development of a computing algorithm that uses both audio and visual sensors to detect and track a user's affective state in order to aid computer decision making. Using our Multistream Fused Hidden Markov Model (MFHMM), we analyzed coupled audio and visual streams to detect four cognitive states (interest, boredom, frustration, and puzzlement) and seven prototypical emotions (neutral, happiness, sadness, anger, disgust, fear, and surprise). The MFHMM builds an optimal connection among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. Person-independent experimental results from 20 subjects in 660 sequences show that the MFHMM approach outperforms face-only HMM, pitch-only HMM, energy-only HMM, and independent HMM fusion under both clean and varying audio-channel noise conditions. © 2008 IEEE.
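The abstract describes the MFHMM at a high level only. As a rough illustration of the decision-level baseline it is compared against (independent HMM fusion), the sketch below trains one Gaussian HMM per affective class per stream (face, pitch, energy) and classifies a sequence by a weighted sum of per-stream log-likelihoods. It is a minimal sketch, not the authors' MFHMM: the hmmlearn library, the stream weights, the number of hidden states, and the feature layout are all assumptions.

    # Minimal sketch of independent (decision-level) HMM fusion, the baseline
    # the paper compares the MFHMM against. Stream names, fusion weights, and
    # state counts are illustrative assumptions, not values from the paper.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    EMOTIONS = ["neutral", "happiness", "sadness", "anger",
                "disgust", "fear", "surprise"]
    STREAMS = ["face", "pitch", "energy"]          # one HMM per class per stream
    WEIGHTS = {"face": 0.5, "pitch": 0.3, "energy": 0.2}  # assumed weights

    def train_models(train_data, n_states=3):
        """train_data[(emotion, stream)] is a list of (T_i, d) feature arrays."""
        models = {}
        for emotion in EMOTIONS:
            for stream in STREAMS:
                seqs = train_data[(emotion, stream)]
                X = np.vstack(seqs)                # stack sequences row-wise
                lengths = [len(s) for s in seqs]   # per-sequence lengths
                m = GaussianHMM(n_components=n_states, covariance_type="diag")
                m.fit(X, lengths)
                models[(emotion, stream)] = m
        return models

    def classify(models, obs):
        """obs[stream] is one (T, d) observation sequence; returns best class."""
        def fused_log_likelihood(emotion):
            return sum(WEIGHTS[s] * models[(emotion, s)].score(obs[s])
                       for s in STREAMS)
        return max(EMOTIONS, key=fused_log_likelihood)

Unlike this baseline, which scores each stream independently, the MFHMM couples the streams' hidden-state chains, choosing the connection structure via the maximum entropy principle and the maximum mutual information criterion; per the abstract, this is what yields its advantage under varying audio-channel noise.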

Citation (APA)

Zeng, Z., Tu, J., Pianfetti, B. M., & Huang, T. S. (2008). Audio-visual affective expression recognition through multistream fused HMM. IEEE Transactions on Multimedia, 10(4), 570–577. https://doi.org/10.1109/TMM.2008.921737
