The prevalence of online misinformation, termed "fake news", has escalated sharply in recent years. This deceptive content, often rich in multimodal material, can easily mislead individuals into spreading it across social media platforms, which has made the automatic detection of multimodal fake news an active research topic. Existing works have made great progress on inter-modality feature fusion and semantic interaction, yet they largely ignore the importance of intra-modality entities and feature aggregation. This imbalance causes them to perform erratically on data with different emphases. In authentic news, the intra-modality content and the inter-modality relationship should be mutually supportive. Inspired by this idea, we propose an innovative approach to multimodal fake news detection (IFIS) that incorporates both intra-modality feature aggregation and inter-modality semantic fusion. Specifically, the proposed model implements an entity detection module and utilizes attention mechanisms for intra-modality feature aggregation, while inter-modality semantic fusion is accomplished via two concurrent co-attention blocks. The performance of IFIS is extensively tested on two datasets, namely Weibo and Twitter, where it demonstrates superior performance, surpassing various advanced methods by 0.6. The experimental results validate the capability of the proposed approach to offer the most balanced performance for multimodal fake news detection tasks.
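As a rough illustration of the two components the abstract names, the sketch below combines an attention-based intra-modality aggregation step with a pair of co-attention blocks for inter-modality fusion, written in PyTorch. The module names, feature dimensions, and sequence lengths are assumptions for illustration only, not the paper's actual IFIS architecture.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Intra-modality aggregation: learned attention pooling over
    entity/region features (illustrative, not the paper's exact module)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                  # feats: (B, L, dim)
        weights = torch.softmax(self.score(feats), dim=1)
        return (weights * feats).sum(dim=1)    # (B, dim)

class CoAttentionBlock(nn.Module):
    """Inter-modality fusion: queries from one modality attend over the other."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        fused, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + fused)  # residual + layer norm

# Hypothetical shapes: 32 text-entity tokens, 49 image regions, dim 256.
text  = torch.randn(2, 32, 256)
image = torch.randn(2, 49, 256)
t2i, i2t = CoAttentionBlock(), CoAttentionBlock()  # two concurrent blocks
text_fused  = t2i(text, image)                     # text attends to image
image_fused = i2t(image, text)                     # image attends to text
text_vec  = AttentionPool()(text_fused)            # aggregate each modality
image_vec = AttentionPool()(image_fused)
logits = nn.Linear(512, 2)(torch.cat([text_vec, image_vec], dim=-1))
```

The two `CoAttentionBlock` instances run on the same inputs in opposite directions, which is one common way to realize "two concurrent co-attention blocks"; the paper's exact wiring may differ.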
CITATION STYLE
Zhu, P., Hua, J., Tang, K., Tian, J., Xu, J., & Cui, X. (2024). Multimodal fake news detection through intra-modality feature aggregation and inter-modality semantic fusion. Complex and Intelligent Systems, 10(4), 5851–5863. https://doi.org/10.1007/s40747-024-01473-5