MCIC: Multimodal Conversational Intent Classification for E-commerce Customer Service

Abstract

Conversational intent classification (CIC) plays a significant role in dialogue understanding, and most previous work focuses only on the text modality. Nevertheless, in real conversations with E-commerce customer service, users often send images (screenshots and photos) interleaved with text, which makes multimodal CIC a challenging task for customer service systems. To understand the intent of a multimodal conversation, it is essential to understand the content of both the text and the images. In this paper, we construct a large-scale dataset for multimodal CIC in the Chinese E-commerce scenario, named MCIC, which contains more than 30,000 multimodal dialogues with image categories, OCR text (the text contained in images), and intent labels. To fuse visual and textual information effectively, we design two vision-language baselines that integrate either images or OCR text with the dialogue utterances. Experimental results verify that both text and images are important for CIC in E-commerce customer service.
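
To illustrate the OCR-text baseline described above, here is a minimal sketch of fusing OCR text with dialogue utterances by simple concatenation and classifying the intent with a pretrained Chinese BERT. This is not the authors' exact architecture: the model name, separator scheme, and number of intent labels are illustrative assumptions.

# Minimal sketch (assumed, not the paper's exact baseline): concatenate the
# dialogue turns with the OCR text extracted from images, then run a
# sequence-classification head over a pretrained Chinese BERT.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_INTENTS = 30  # hypothetical number of intent labels in MCIC

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=NUM_INTENTS
)

def classify_intent(utterances, ocr_texts):
    """Join dialogue turns and OCR text with [SEP], return a predicted intent id."""
    text = tokenizer.sep_token.join(utterances + ocr_texts)
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# Example: a user reports a broken item and sends a screenshot containing an order id.
print(classify_intent(["我买的杯子收到时已经碎了", "可以退款吗？"], ["订单编号: 12345"]))

A fine-tuned classifier of this form treats OCR text as extra dialogue context; the paper's image-based baseline would instead feed visual features into the fusion step.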

Cite (APA)

Yuan, S., Shen, X., Zhao, Y., Liu, H., Yan, Z., Liu, R., & Chen, M. (2022). MCIC: Multimodal Conversational Intent Classification for E-commerce Customer Service. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13551 LNAI, pp. 749–761). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-17120-8_58
