Elevating Fake News Detection Through Deep Neural Networks, Encoding Fused Multi-Modal Features

Abstract

Traditional methods for detecting fake news initially focused on textual content and yielded acceptable results. However, with the exponential growth of social media platforms, content has shifted significantly toward the visual, and text-only detection methods have become inadequate for fully detecting fake news. This paper proposes a model for detecting fake news using multi-modal features. The model comprises four main stages: feature extraction, feature fusion, dimension reduction, and classification. A pre-trained BERT, a gated recurrent unit (GRU), and a convolutional neural network (CNN) extract complementary textual features, while ResNet-CBAM extracts image features; the multi-type features are then fused. An auto-encoder reduces the dimensionality of the fused features, and an FLN classifier is applied to the encoded features to detect instances of fake news. Experiments on two multi-modal datasets, Weibo and Fakeddit, demonstrate that the proposed model effectively detects fake news from multi-modal data, achieving 88% accuracy on Weibo and 98% on Fakeddit. These results show that the proposed model outperforms previous works and is more effective on the large dataset.
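The extract-fuse-encode-classify flow described in the abstract can be sketched as follows. This is a minimal NumPy illustration only: the feature dimensions, the single linear encoding layer, and the logistic stand-in for the FLN classifier are all assumptions for demonstration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature vectors for one news post (dimensions are assumptions).
bert_feat = rng.standard_normal(768)  # pre-trained BERT text embedding
gru_feat = rng.standard_normal(128)   # GRU sequential text features
cnn_feat = rng.standard_normal(128)   # CNN local text features
img_feat = rng.standard_normal(512)   # ResNet-CBAM image features

# Feature fusion: concatenate all modalities into one vector.
fused = np.concatenate([bert_feat, gru_feat, cnn_feat, img_feat])  # 1536-d

# Dimension reduction: a single random linear layer with tanh stands in for
# the trained auto-encoder, just to show the shape change.
W_enc = rng.standard_normal((256, fused.size)) * 0.01
encoded = np.tanh(W_enc @ fused)  # 256-d code passed to the classifier

# Classification: a logistic unit stands in for the FLN classifier.
w, b = rng.standard_normal(256), 0.0
p_fake = 1.0 / (1.0 + np.exp(-(w @ encoded + b)))
print(fused.shape, encoded.shape)
```

In the actual model each stage is trained; here the point is only the data flow: four feature extractors, one concatenation, one encoding that shrinks the fused vector, and a final binary decision on the code.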

APA

Hashim Jawad Almarashy, A., Feizi-Derakhshi, M. R., & Salehpour, P. (2024). Elevating Fake News Detection Through Deep Neural Networks, Encoding Fused Multi-Modal Features. IEEE Access, 12, 82146–82155. https://doi.org/10.1109/ACCESS.2024.3411926
