Multimodal Framework for Analyzing the Affect of a Group of People

Abstract

With the advances in multimedia technology and the World Wide Web, users upload millions of images and videos every day to social networking platforms on the Internet. From the perspective of automatic human behavior understanding, it is of interest to analyze and model the affect exhibited by groups of people participating in social events in these images. However, analyzing the affect expressed by multiple people is challenging due to the varied indoor and outdoor settings in which such images are captured. Recently, a few interesting works have investigated face-based group-level emotion recognition (GER). In this paper, we propose a multimodal framework for enhancing the affective analysis ability of GER in challenging environments. Specifically, to encode each person's information in a group-level image, we first propose an information aggregation method that generates feature descriptions of the face, upper body, and scene. We then revisit localized multiple kernel learning to fuse the face, upper-body, and scene information for GER in challenging environments. Extensive experiments are performed on two challenging group-level emotion databases (HAPPEI and GAFF) to investigate the roles of face, upper-body, and scene information and of the multimodal framework. Experimental results demonstrate that the multimodal framework achieves promising performance for GER.
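
The sketch below illustrates, under invented assumptions, the two stages the abstract describes: aggregating per-person descriptors into one group-level descriptor per image, and fusing the face, upper-body, and scene modalities with a multiple-kernel SVM. All data, descriptor dimensions, and kernel weights are placeholders; average pooling stands in for the paper's aggregation method, and the fixed global kernel weights stand in for localized MKL, which learns sample-specific weights. This is a minimal illustration of the fusion idea, not the authors' implementation.

```python
# Hypothetical sketch of the two stages named in the abstract.
# Assumptions (not from the paper): average pooling as the aggregation
# step, RBF kernels, fixed global modality weights instead of the
# sample-specific weights that localized MKL would learn.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

def aggregate(per_person_feats):
    """Average-pool per-person descriptors (n_people, dim) into a
    single group-level descriptor (dim,)."""
    return np.mean(per_person_feats, axis=0)

def make_split(n_images, dim):
    """Placeholder data: each image has 2-7 detected people."""
    return np.stack([aggregate(rng.normal(size=(rng.integers(2, 8), dim)))
                     for _ in range(n_images)])

n_tr, n_te = 100, 20
X_face_tr, X_face_te = make_split(n_tr, 64), make_split(n_te, 64)
X_body_tr, X_body_te = make_split(n_tr, 32), make_split(n_te, 32)
X_scene_tr = rng.normal(size=(n_tr, 128))  # scene descriptor is per image
X_scene_te = rng.normal(size=(n_te, 128))
y_tr = rng.integers(0, 3, size=n_tr)       # e.g. negative / neutral / positive

def fused_kernel(rows, cols, weights):
    """Weighted sum of per-modality RBF kernels (global-weight MKL)."""
    return sum(w * rbf_kernel(a, b) for w, a, b in zip(weights, rows, cols))

weights = [0.5, 0.2, 0.3]  # illustrative; localized MKL learns these per sample
train = [X_face_tr, X_body_tr, X_scene_tr]
test = [X_face_te, X_body_te, X_scene_te]

clf = SVC(kernel="precomputed")
clf.fit(fused_kernel(train, train, weights), y_tr)
print(clf.predict(fused_kernel(test, train, weights)))
```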

Cite (APA)

Huang, X., Dhall, A., Goecke, R., Pietikainen, M., & Zhao, G. (2018). Multimodal Framework for Analyzing the Affect of a Group of People. IEEE Transactions on Multimedia, 20(10), 2706–2721. https://doi.org/10.1109/TMM.2018.2818015
