Multi-Modal Sarcasm Detection via Cross-Modal Graph Convolutional Network

83 citations · 57 Mendeley readers

Abstract

With the increasing popularity of posting multi-modal messages online, many recent studies have exploited both textual and visual information for multi-modal sarcasm detection. In this paper, we investigate multi-modal sarcasm detection from a novel perspective: we construct a cross-modal graph for each instance to explicitly capture the ironic relations between the textual and visual modalities. Specifically, we first detect the objects in the image, paired with textual descriptions, which enables the model to learn the important visual information. The object descriptions then serve as a bridge for determining the importance of the associations between the image objects and the contextual words of the text, so as to build a cross-modal graph for each multi-modal instance. Furthermore, we devise a cross-modal graph convolutional network that reasons over the incongruity relations between modalities for multi-modal sarcasm detection. Extensive experimental results and in-depth analysis show that our model achieves state-of-the-art performance on multi-modal sarcasm detection.
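To make the pipeline concrete, the sketch below gives one plausible reading of the approach in PyTorch. It is not the authors' released implementation: the names (build_cross_modal_adjacency, CrossModalGCNLayer), the cosine-similarity edge weighting, the pruning threshold, and the mean-pooled readout are all illustrative assumptions; the paper's actual graph construction and layer details are given in the full text.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def build_cross_modal_adjacency(text_emb, obj_desc_emb, threshold=0.5):
        """Hypothetical graph construction: connect each contextual word to
        each detected object via the similarity between the word embedding
        and the object-description embedding (the 'bridge' between modalities).

        text_emb:     (n_words, d)   embeddings of the caption's words
        obj_desc_emb: (n_objects, d) embeddings of the objects' descriptions
        Returns a symmetric (n_words + n_objects) square adjacency matrix.
        """
        n_w = text_emb.size(0)
        n = n_w + obj_desc_emb.size(0)
        adj = torch.eye(n)  # self-loops for every node

        # Cross-modal edges weighted by cosine similarity; weak associations
        # are pruned so only salient word-object pairs stay connected.
        sim = F.cosine_similarity(text_emb.unsqueeze(1),
                                  obj_desc_emb.unsqueeze(0), dim=-1)
        sim = torch.where(sim > threshold, sim, torch.zeros_like(sim))
        adj[:n_w, n_w:] = sim
        adj[n_w:, :n_w] = sim.t()
        return adj

    class CrossModalGCNLayer(nn.Module):
        """One graph-convolution layer over the joint text+object node set."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.linear = nn.Linear(d_in, d_out)

        def forward(self, h, adj):
            # Row-normalise so each node averages its neighbours' features,
            # letting text and object nodes exchange (in)congruity signals.
            deg = adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
            return F.relu(self.linear((adj / deg) @ h))

    # Toy usage: 6 caption words, 3 detected objects, 128-dim features.
    text_emb = torch.randn(6, 128)
    obj_desc_emb = torch.randn(3, 128)
    adj = build_cross_modal_adjacency(text_emb, obj_desc_emb)
    h = torch.cat([text_emb, obj_desc_emb], dim=0)  # (9, 128) node features
    h = CrossModalGCNLayer(128, 128)(h, adj)        # one round of message passing
    readout = h.mean(dim=0)                         # pooled graph representation

In this reading, the pooled readout would feed a binary sarcasm classifier head; the similarity measure, threshold, and pooling are design assumptions rather than the paper's reported choices.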

Cite

APA

Liang, B., Lou, C., Li, X., Yang, M., Gui, L., He, Y., … Xu, R. (2022). Multi-Modal Sarcasm Detection via Cross-Modal Graph Convolutional Network. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1767–1777). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.124
