Multi-source Semantic Graph-based Multimodal Sarcasm Explanation Generation


Abstract

Multimodal Sarcasm Explanation (MuSE) is a new yet challenging task that aims to generate a natural language sentence for a multimodal social post (an image together with its caption) explaining why the post is sarcastic. Although the pioneering study achieved strong results with a BART backbone, it overlooks the gap between the visual feature space and the decoder semantic space, the object-level metadata of the image, and potential external knowledge. To address these limitations, we propose a novel mulTi-source sEmantic grAph-based Multimodal sarcasm explanation scheme, named TEAM. In particular, TEAM extracts object-level semantic metadata from the input image instead of the traditional global visual features. Meanwhile, TEAM resorts to ConceptNet to obtain external knowledge concepts related to the input text and the extracted object metadata. Thereafter, TEAM introduces a multi-source semantic graph that comprehensively characterizes the multi-source (i.e., caption, object metadata, external knowledge) semantic relations to facilitate sarcasm reasoning. Extensive experiments on the publicly released MORE dataset verify the superiority of our model over cutting-edge methods.
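The graph construction the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the node and edge choices (caption token adjacency, cross-modal caption–object links, knowledge-concept attachment) and the `TOY_CONCEPTNET` lookup table are illustrative assumptions standing in for real ConceptNet retrieval.

```python
# Hedged sketch of a multi-source semantic graph: nodes come from
# caption tokens, object-level metadata, and external knowledge
# concepts; TOY_CONCEPTNET is a toy stand-in for ConceptNet queries.
from collections import defaultdict

TOY_CONCEPTNET = {
    "umbrella": ["rain", "protection"],
    "sun": ["heat", "weather"],
}

def build_multi_source_graph(caption_tokens, object_metadata,
                             knowledge=TOY_CONCEPTNET):
    """Return an undirected graph as an adjacency dict.

    Edges (illustrative choices): sequential links between adjacent
    caption tokens, cross-modal links between every object label and
    every caption token, and links from any node to its retrieved
    knowledge concepts.
    """
    graph = defaultdict(set)
    # Sequential relation: adjacent caption tokens.
    for a, b in zip(caption_tokens, caption_tokens[1:]):
        graph[a].add(b)
        graph[b].add(a)
    # Cross-modal relation: object labels <-> caption tokens.
    for obj in object_metadata:
        for tok in caption_tokens:
            graph[obj].add(tok)
            graph[tok].add(obj)
    # External knowledge: attach related concepts to existing nodes.
    for node in list(graph):
        for concept in knowledge.get(node, []):
            graph[node].add(concept)
            graph[concept].add(node)
    return dict(graph)

g = build_multi_source_graph(["what", "a", "lovely", "sun"], ["umbrella"])
```

In a full model, such a graph would be encoded (e.g., with a graph neural network) and fused with the BART decoder; this sketch only shows the multi-source node/edge assembly step.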

Citation (APA)
Jing, L., Song, X., Ouyang, K., Jia, M., & Nie, L. (2023). Multi-source Semantic Graph-based Multimodal Sarcasm Explanation Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 11349–11361). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.635
