Leveraging Intra and Inter Modality Relationship for Multimodal Fake News Detection

Abstract

Recent years have witnessed a massive proliferation of fake news online. User-generated content blends text and visual information, producing different variants of fake news. As a result, researchers have started targeting multimodal methods for fake news detection. Existing methods capture high-level information from different modalities and jointly model them to make a decision. Given multiple input modalities, we hypothesize that not all modalities are equally responsible for decision-making. Hence, this paper presents a novel architecture that effectively identifies and suppresses information from weaker modalities and extracts relevant information from the strong modality on a per-sample basis. We also establish intra-modality relationships by extracting fine-grained image and text features. We conduct extensive experiments on real-world datasets to show that our approach outperforms the state-of-the-art by an average of 3.05% and 4.525% on accuracy and F1-score, respectively. We also release the code, implementation details, and model checkpoints for the community's interest.
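The abstract describes per-sample suppression of weaker modalities during fusion. The sketch below is a minimal, hypothetical illustration of one way such a gating mechanism could be written in PyTorch, assuming text and image feature vectors have already been extracted (e.g., from a text encoder and an image encoder). It is not the authors' released implementation; the module name, dimensions, and gating scheme are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class ModalityGatedFusion(nn.Module):
    """Hypothetical per-sample modality gating for multimodal fusion.

    NOTE: illustrative sketch only, not the architecture from the paper.
    Assumes pre-extracted text and image feature vectors of equal size.
    """
    def __init__(self, dim: int, num_classes: int = 2):
        super().__init__()
        # Scores how informative each modality is for the current sample.
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        # text_feat, img_feat: (batch, dim)
        weights = self.gate(torch.cat([text_feat, img_feat], dim=-1))  # (batch, 2)
        # Down-weight the weaker modality and emphasize the stronger one per sample.
        fused = weights[:, 0:1] * text_feat + weights[:, 1:2] * img_feat
        return self.classifier(fused)

# Example usage with random tensors standing in for encoder outputs.
model = ModalityGatedFusion(dim=768)
logits = model(torch.randn(4, 768), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```

The softmax gate makes the two modality weights sum to one for each sample, so a sample whose image carries little signal can lean almost entirely on its text features, and vice versa; the paper's actual mechanism may differ.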

Citation (APA)

Singhal, S., Pandey, T., Mrig, S., Shah, R. R., & Kumaraguru, P. (2022). Leveraging Intra and Inter Modality Relationship for Multimodal Fake News Detection. In WWW 2022 - Companion Proceedings of the Web Conference 2022 (pp. 726–734). Association for Computing Machinery, Inc. https://doi.org/10.1145/3487553.3524650
