MREDDITSUM: A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images


Abstract

The growing number of multimodal online discussions necessitates automatic summarization to save time and reduce content overload. However, existing summarization datasets are not suitable for this purpose, as they lack discussions, multiple modalities, or both. To this end, we present MREDDITSUM, the first multimodal discussion summarization dataset. It consists of 3,033 discussion threads in which a post solicits advice on an issue described with an image and text, and the comments express diverse opinions. We annotate each thread with a human-written summary that captures both the essential information in the text and the details available only in the image. Experiments show that popular summarization models (GPT-3.5, BART, and T5) consistently improve when visual information is incorporated. We also introduce a novel method, cluster-based multi-stage summarization, which outperforms existing baselines and serves as a competitive baseline for future work.
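The abstract names cluster-based multi-stage summarization without detailing it. The sketch below shows one plausible shape for such a pipeline — group similar comments, summarize each group, then combine the group summaries with the post — using toy stand-ins (word-overlap clustering, shortest-comment selection) in place of the paper's actual clustering and abstractive models, which are not reproduced here. All function names are illustrative assumptions.

```python
from collections import Counter

def bag_of_words(text):
    """Toy text representation: a multiset of lowercase tokens."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Cosine-like overlap between two bags of words."""
    shared = sum((a & b).values())
    denom = (sum(a.values()) * sum(b.values())) ** 0.5
    return shared / denom if denom else 0.0

def cluster_comments(comments, threshold=0.3):
    """Stage 1 (illustrative): greedily group comments whose word
    overlap with a cluster's first comment exceeds the threshold."""
    clusters = []
    for comment in comments:
        bow = bag_of_words(comment)
        for cluster in clusters:
            if similarity(bow, bag_of_words(cluster[0])) >= threshold:
                cluster.append(comment)
                break
        else:
            clusters.append([comment])
    return clusters

def summarize_cluster(cluster):
    """Stage 2 stand-in: pick the shortest comment as the cluster
    'summary'. A real system would run an abstractive model here."""
    return min(cluster, key=len)

def multi_stage_summary(post, comments):
    """Stage 3: combine the post with one summary per opinion cluster."""
    parts = [summarize_cluster(c) for c in cluster_comments(comments)]
    return post + " Commenters suggest: " + " | ".join(parts)
```

For example, given a post asking how to fix a door and comments split between "paint it" and "replace it" opinions, the pipeline yields one suggestion per cluster rather than one line per comment, which is the point of summarizing at the cluster level.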

Citation (APA)

Overbay, K., Ahn, J., Zadeh, F. P., Park, J., & Kim, G. (2023). MREDDITSUM: A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 4117–4132). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.251
