Deep multimodal emotion recognition on human speech: A review

Abstract

This work reviews the state of the art in multimodal speech emotion recognition methodologies, focusing on audio, text and visual information. We provide a new, descriptive categorization of methods, based on the way they handle the inter-modality and intra-modality dynamics in the temporal dimension: (i) non-temporal architectures (NTA), which do not significantly model the temporal dimension in either the unimodal or the multimodal interactions; (ii) pseudo-temporal architectures (PTA), which also oversimplify the temporal dimension, but only in one of the unimodal or multimodal interactions; and (iii) temporal architectures (TA), which try to capture both unimodal and cross-modal temporal dependencies. In addition, we review the basic feature representation methods for each modality, and we present aggregated evaluation results for the reported methodologies. Finally, we conclude this work with an in-depth analysis of the future challenges related to validation procedures, representation learning and method robustness.
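To make the NTA/TA distinction concrete, the sketch below contrasts a non-temporal and a temporal fusion design in PyTorch. It is an illustrative toy under stated assumptions, not any model from the reviewed literature: the class names (NTAFusion, TAFusion), the LSTM encoders, and the feature dimensions (74-d audio, 300-d text, 35-d visual, 50 time-aligned frames) are all placeholders.

```python
import torch
import torch.nn as nn

class NTAFusion(nn.Module):
    """Non-temporal architecture (NTA): each modality is mean-pooled over
    time before fusion, so neither unimodal nor cross-modal temporal
    dynamics are modelled."""
    def __init__(self, dims, hidden, n_classes):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.clf = nn.Linear(hidden * len(dims), n_classes)

    def forward(self, feats):  # feats: list of (batch, time, dim) tensors
        pooled = [p(x.mean(dim=1)) for p, x in zip(self.proj, feats)]
        return self.clf(torch.cat(pooled, dim=-1))

class TAFusion(nn.Module):
    """Temporal architecture (TA): per-modality LSTMs capture unimodal
    dynamics, and a second LSTM over the concatenated hidden-state
    sequences captures cross-modal temporal dependencies."""
    def __init__(self, dims, hidden, n_classes):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.LSTM(d, hidden, batch_first=True) for d in dims])
        self.fusion = nn.LSTM(hidden * len(dims), hidden, batch_first=True)
        self.clf = nn.Linear(hidden, n_classes)

    def forward(self, feats):  # sequences assumed time-aligned
        encoded = [enc(x)[0] for enc, x in zip(self.encoders, feats)]
        fused, _ = self.fusion(torch.cat(encoded, dim=-1))
        return self.clf(fused[:, -1])  # last fused state -> emotion logits

# Toy usage: batch of 8 utterances, 50 aligned frames per modality.
audio, text, visual = (torch.randn(8, 50, d) for d in (74, 300, 35))
for model in (NTAFusion([74, 300, 35], 64, 4), TAFusion([74, 300, 35], 64, 4)):
    print(model([audio, text, visual]).shape)  # torch.Size([8, 4])
```

A PTA variant would sit between the two, for instance keeping the per-modality LSTMs but mean-pooling their outputs before a feed-forward fusion, so that temporal structure is modelled in only one of the two stages.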

Cite

APA

Koromilas, P., & Giannakopoulos, T. (2021). Deep multimodal emotion recognition on human speech: A review. Applied Sciences, 11(17), 7962. https://doi.org/10.3390/app11177962
