MCPN: A Multiple Cross-Perception Network for Real-Time Emotion Recognition in Conversation


Abstract

Emotion recognition in conversation (ERC) is crucial for developing empathetic machines. Most recent work models speaker interaction and context as a static process, ignoring the temporal dynamics of both the interaction and the dialogue semantics. The misclassification of similar emotions is a further unsolved challenge. To address these problems, we propose the Multiple Cross-Perception Network (MCPN) for multimodal real-time conversation scenarios. We dynamically select a speaker-interaction interval for each time step, so that the model can effectively capture the dynamics of interaction. We also introduce a multiple cross-perception process that alternately perceives the context and speaker-state information captured by the model, allowing it to capture the semantics and interaction information specific to each time step more accurately. Furthermore, we propose an emotion-triple recognition process to improve the model's ability to distinguish similar emotions. Experiments on multiple datasets demonstrate the effectiveness of the proposed method.
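To make the two core ideas concrete, here is a minimal sketch of (a) per-time-step dynamic selection of a speaker-interaction interval and (b) a single cross-perception pass as attention over that interval. The selection rule (extend the window back until it covers both the current speaker and another speaker, capped at `max_window`) and all function names are hypothetical illustrations, not the paper's actual algorithm.

```python
import numpy as np

def select_interaction_interval(speakers, t, max_window=4):
    """For time step t, dynamically choose an interval [start, t) of prior
    utterances to attend over. Hypothetical rule: extend backwards until the
    window contains at least one utterance by the current speaker and one by
    another speaker, capped at max_window utterances."""
    current = speakers[t]
    start = t
    seen_self = seen_other = False
    while start > 0 and t - start < max_window:
        start -= 1
        if speakers[start] == current:
            seen_self = True
        else:
            seen_other = True
        if seen_self and seen_other:
            break
    return start, t

def cross_perceive(context_states, query_state):
    """One cross-perception pass, sketched as scaled dot-product attention of
    the current utterance state over its dynamically selected context."""
    d = query_state.shape[-1]
    scores = context_states @ query_state / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ context_states
```

For a dialogue with speaker sequence `["A", "B", "A", "A", "B"]`, the interval chosen at `t = 4` stops as soon as it reaches the previous "B" utterance, so the context window adapts to the interaction pattern rather than being a fixed length.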

Citation (APA)

Liu, W., & Sun, X. (2023). MCPN: A Multiple Cross-Perception Network for Real-Time Emotion Recognition in Conversation. In Communications in Computer and Information Science (Vol. 1765 CCIS, pp. 1–15). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-2401-1_1
