Multimedia communication that combines text and images is popular on social media. However, few studies examine how images are structured with text to form coherent meanings in human cognition. To fill this gap, we present a novel concept of cross-modality discourse, reflecting how human readers couple image and text understanding. Text descriptions (termed subtitles) are first derived from images in their multimedia contexts. Five labels - entity-level insertion, projection, and concretization, and scene-level restatement and extension - are then employed to characterize the structure of subtitles and texts and to capture their joint meanings. As a pilot study, we also build the first dataset of its kind, containing 16K multimedia tweets with manually annotated discourse labels. Experimental results show that a multimedia encoder based on multi-head attention with captions achieves state-of-the-art results.
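The abstract does not detail the encoder beyond multi-head attention over captions. As a rough illustration only, the sketch below shows one plausible way such a model could fuse a tweet's text with an image-derived caption via cross-attention and classify into the five discourse labels; all module names, dimensions, and the pooling scheme are assumptions, not the authors' implementation.

```python
# A minimal sketch (PyTorch), NOT the paper's code: tweet text attends to an
# image-derived caption ("subtitle") via multi-head cross-attention, and the
# fused representation is classified into the five discourse labels.
# All sizes and the mean-pooling choice are illustrative assumptions.
import torch
import torch.nn as nn


class CrossModalityDiscourseClassifier(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, n_labels=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Text tokens query caption tokens (cross-modality attention).
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # 5 outputs: insertion, projection, concretization, restatement, extension.
        self.classifier = nn.Linear(d_model, n_labels)

    def forward(self, text_ids, caption_ids):
        text = self.embed(text_ids)        # (B, T_text, d_model)
        caption = self.embed(caption_ids)  # (B, T_cap, d_model)
        # Queries from the tweet text; keys/values from the caption.
        fused, _ = self.cross_attn(text, caption, caption)
        pooled = fused.mean(dim=1)         # simple mean pooling over tokens
        return self.classifier(pooled)     # logits over 5 discourse labels


# Toy usage with random token ids:
model = CrossModalityDiscourseClassifier()
text = torch.randint(0, 30522, (2, 16))
caption = torch.randint(0, 30522, (2, 12))
logits = model(text, caption)  # shape (2, 5)
```

In this reading, the five discourse relations map directly onto the classifier's output classes, and the caption serves as a linguistic stand-in for the image so that fusion happens entirely in text space.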
CITATION STYLE
Xu, C., Tan, H., Li, J., & Li, P. (2022). Understanding Social Media Cross-Modality Discourse in Linguistic Space. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 2459–2471). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.182