On Explaining Multimodal Hateful Meme Detection Models


Abstract

Hateful meme detection is a new multimodal task that has gained significant traction in academic and industry research communities. Recently, researchers have applied pre-trained visual-linguistic models to perform the multimodal classification task, and some of these solutions have yielded promising results. However, what these visual-linguistic models learn for the hateful meme classification task remains unclear. For instance, it is unclear whether these models are able to capture the derogatory or slur references in the multimodal content (i.e., image and text) of hateful memes. To fill this research gap, this paper proposes three research questions to improve our understanding of these visual-linguistic models performing the hateful meme classification task. We found that the image modality contributes more to the hateful meme classification task, and that the visual-linguistic models are able to perform visual-text slur grounding to a certain extent. Our error analysis also shows that the visual-linguistic models have acquired biases, which resulted in false-positive predictions.
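To make the classification setup concrete, below is a minimal illustrative sketch of a late-fusion multimodal classifier over pre-extracted image and text features. This is not the model analyzed in the paper; the module names, feature dimensions, and fusion strategy are assumptions chosen only to illustrate how image and text modalities can be combined for a binary hateful/non-hateful prediction.

# Illustrative sketch only -- NOT the paper's actual model.
# Encoders, dimensions, and the fusion strategy are assumptions for illustration.
import torch
import torch.nn as nn

class LateFusionMemeClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, hidden_dim=512):
        super().__init__()
        # Project pre-extracted image and text features into a shared space.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Binary head: hateful vs. non-hateful.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, 1),
        )

    def forward(self, image_feats, text_feats):
        # Concatenate the projected modalities (late fusion) and classify.
        fused = torch.cat(
            [self.image_proj(image_feats), self.text_proj(text_feats)], dim=-1
        )
        return self.classifier(fused)  # raw logit; apply sigmoid for probability

# Toy usage with random tensors standing in for real encoder outputs.
model = LateFusionMemeClassifier()
image_feats = torch.randn(4, 2048)   # e.g., features from a visual backbone
text_feats = torch.randn(4, 768)     # e.g., features from a text encoder
probs = torch.sigmoid(model(image_feats, text_feats))

Zeroing out one modality's features (e.g., passing torch.zeros_like(image_feats)) is one simple way to probe how much each modality contributes, in the spirit of the modality-contribution question studied in the paper.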

Citation (APA)
Hee, M. S., Lee, R. K. W., & Chong, W. H. (2022). On Explaining Multimodal Hateful Meme Detection Models. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 3651–3655). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3512260
