Abstract
In the digital era, the news ecosystem has shifted from traditional print media to fast-paced social platforms where information spreads instantly. However, reduced editorial oversight has made these platforms vulnerable to widespread misinformation. This paper proposes a novel, explainable artificial intelligence (AI) framework that combines Bidirectional Encoder Representations from Transformers (BERT) with Long Short-Term Memory (LSTM) networks to detect and classify disinformation. The study offers technical, methodological, theoretical, and practical advancements. Technically, the hybrid BERT–LSTM model demonstrates significantly higher accuracy than traditional methods in misinformation detection. Methodologically, the model incorporates explainability through SHapley Additive exPlanations (SHAP), which quantify how key social media engagement features—such as likes, comments, and shares—influence model predictions. This analysis reveals both the individual and combined effects of these features on misinformation classification. Theoretically, the research advances explainable AI by introducing a dual contribution: a hybrid model architecture and a feature association analysis. Practically, the proposed model offers a transparent and effective tool for misinformation mitigation, supporting social media platforms and regulatory agencies in strengthening content governance and fostering a healthier digital information environment.
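The abstract's SHAP analysis attributes a classifier's output to engagement features such as likes, comments, and shares. The idea can be illustrated with an exact Shapley-value computation over a toy scoring function; the `toy_model` below and its weights are hypothetical stand-ins for illustration only, not the paper's BERT–LSTM classifier.

```python
from itertools import combinations
from math import factorial

FEATURES = ["likes", "comments", "shares"]

def toy_model(active):
    """Hypothetical stand-in for a misinformation score given a set of
    active engagement features (NOT the paper's actual model)."""
    weights = {"likes": 0.2, "comments": 0.5, "shares": 0.3}
    score = sum(weights[f] for f in active)
    # A combined (interaction) effect: comments and shares together
    # amplify the score beyond their individual contributions.
    if "comments" in active and "shares" in active:
        score += 0.1
    return score

def shapley_values(model, features):
    """Exact Shapley values: each feature's marginal contribution to the
    model output, averaged over all subsets of the other features."""
    n = len(features)
    values = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(set(subset) | {f}) - model(set(subset)))
        values[f] = total
    return values

phi = shapley_values(toy_model, FEATURES)
print(phi)  # e.g. the interaction term is split between comments and shares
```

In this toy setting the attributions sum to the full-model score (the efficiency property), and the comments/shares interaction is split evenly between those two features — mirroring how the paper's SHAP analysis can separate individual from combined feature effects.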
Citation
Xia, H., Islam, N., Zhu, D., Zhang, J. Z., Behl, A., & Roohanifar, M. (2026). A novel method of fusing Bert-LSTM and XAI for social media disinformation identification. Annals of Operations Research. https://doi.org/10.1007/s10479-026-07063-7