A distributed architecture for multimodal emotion identification


Abstract

This paper introduces a distributed multiagent system architecture for multimodal emotion identification, based on the simultaneous analysis of physiological parameters from wearable devices, human behaviors and activities, and facial micro-expressions. The wearable devices are equipped with electrodermal activity, electrocardiogram, heart rate, and skin temperature sensor agents. Facial expressions are monitored by a vision agent installed at the height of the human’s head, and the user’s activity is monitored by a second vision agent mounted overhead. The emotion is refined as a cooperative decision taken at a central agent node, called the “Central Emotion Detection Node”, from the local decisions offered by three agent nodes: the “Face Expression Analysis Node”, the “Behavior Analysis Node”, and the “Physiological Data Analysis Node”. In this way, emotion identification results are improved through an intelligent fuzzy-based decision-making technique.
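
A minimal sketch of the fusion step described above, not the authors' implementation: each analysis node is assumed to report a fuzzy membership degree per emotion label, and the central node combines them with a simple weighted average before selecting the dominant label. The emotion labels, node weights, and function names below are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch of the Central Emotion Detection Node's fusion step.
    from typing import Dict

    EMOTIONS = ("calm", "stress", "joy")  # assumed label set, not from the paper

    def fuse_local_decisions(
        face: Dict[str, float],       # Face Expression Analysis Node output
        behavior: Dict[str, float],   # Behavior Analysis Node output
        physio: Dict[str, float],     # Physiological Data Analysis Node output
        weights=(0.4, 0.3, 0.3),      # assumed node weights
    ) -> str:
        """Combine per-node membership degrees and return the dominant emotion."""
        fused = {}
        for e in EMOTIONS:
            fused[e] = (
                weights[0] * face.get(e, 0.0)
                + weights[1] * behavior.get(e, 0.0)
                + weights[2] * physio.get(e, 0.0)
            )
        return max(fused, key=fused.get)

    if __name__ == "__main__":
        print(fuse_local_decisions(
            face={"calm": 0.2, "stress": 0.7, "joy": 0.1},
            behavior={"calm": 0.3, "stress": 0.5, "joy": 0.2},
            physio={"calm": 0.1, "stress": 0.8, "joy": 0.1},
        ))  # -> "stress"

A weighted average is only one possible aggregation; the paper's fuzzy decision-making technique may use membership functions and rule bases rather than fixed weights.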

CITATION STYLE

APA

Sokolova, M. V., Fernández-Caballero, A., López, M. T., Martínez-Rodrigo, A., Zangróniz, R., & Pastor, J. M. (2015). A distributed architecture for multimodal emotion identification. In Advances in Intelligent Systems and Computing (Vol. 372, pp. 125–132). Springer Verlag. https://doi.org/10.1007/978-3-319-19629-9_14
