MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation

Abstract

Emotion recognition in conversation (ERC) is a crucial component of affective dialogue systems, helping the system understand users' emotions and generate empathetic responses. However, most existing work models speaker and contextual information using only the textual modality, or leverages multimodal information simply through feature concatenation. To explore a more effective way of utilizing both multimodal and long-distance contextual information, we propose MMGCN, a new model based on a multimodal fused graph convolutional network. MMGCN not only makes effective use of multimodal dependencies, but also leverages speaker information to model inter-speaker and intra-speaker dependencies. We evaluate the proposed model on two public benchmark datasets, IEMOCAP and MELD; the results demonstrate the effectiveness of MMGCN, which outperforms other state-of-the-art methods by a significant margin under the multimodal conversation setting.
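To make the architecture described above concrete, the following is a minimal sketch of a multimodal fused graph convolution over a conversation: one node per (utterance, modality) pair, intra-modality context edges, cross-modality edges linking the three nodes of the same utterance, and speaker embeddings injected into the node features. The dimension sizes, the speaker-embedding injection scheme, the uniform edge weights, and all class names here are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only; hyperparameters and edge weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConv(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} A D^{-1/2} H W)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj):
        deg = adj.sum(dim=-1).clamp(min=1.0)
        norm = deg.pow(-0.5)
        a_hat = norm.unsqueeze(1) * adj * norm.unsqueeze(0)
        return F.relu(a_hat @ self.lin(h))


class MMGCNSketch(nn.Module):
    def __init__(self, dim=128, n_speakers=2, n_classes=6, n_layers=2):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, dim)
        self.layers = nn.ModuleList(GraphConv(dim) for _ in range(n_layers))
        self.classifier = nn.Linear(3 * dim, n_classes)

    def forward(self, text, audio, visual, speakers):
        # text/audio/visual: (n_utt, dim) utterance features per modality;
        # speakers: (n_utt,) integer speaker ids.
        n = text.size(0)
        spk = self.speaker_emb(speakers)
        # One node per (utterance, modality); adding the speaker embedding to
        # each modality's features is an assumed injection scheme.
        h = torch.cat([text + spk, audio + spk, visual + spk], dim=0)
        adj = torch.eye(3 * n)
        for m in range(3):  # intra-modality edges between context utterances
            adj[m * n:(m + 1) * n, m * n:(m + 1) * n] = 1.0
        for i in range(n):  # cross-modality edges linking the same utterance
            idx = torch.tensor([i, n + i, 2 * n + i])
            adj[idx.unsqueeze(1), idx] = 1.0
        for layer in self.layers:
            h = layer(h, adj)
        # Fuse the three modality-specific node states of each utterance.
        fused = torch.cat([h[:n], h[n:2 * n], h[2 * n:]], dim=-1)
        return self.classifier(fused)
```

Running the sketch on random tensors (e.g. `MMGCNSketch()(torch.randn(5, 128), torch.randn(5, 128), torch.randn(5, 128), torch.tensor([0, 1, 0, 1, 0]))`) yields per-utterance emotion logits; the point is only to show how cross-modal edges let the graph convolution fuse modalities while context edges carry long-distance conversational dependencies.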

Cite (APA)

Hu, J., Liu, Y., Zhao, J., & Jin, Q. (2021). MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021) (pp. 5666–5675). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-long.440
