Semantic oppositeness assisted deep contextual modeling for automatic rumor detection in social networks

7 Citations · 72 Mendeley readers

Abstract

Social networks face a major challenge in the form of rumors and fake news, owing to their intrinsic nature of connecting millions of users and giving any individual the power to post anything. Given how rapidly and widely information spreads in social networks, manually vetting suspicious news does not scale, making research on automatic rumor detection a necessity. Previous work in the domain has achieved state-of-the-art performance by exploiting the reply relations between posts, as well as the semantic similarity between the main post and its context of replies. In this work, we demonstrate that semantic oppositeness can further improve performance on the task of rumor detection: it captures elements of discord that are not properly covered by prior efforts relying only on semantic similarity or reply structure. Our proposed model learns both explicit and implicit relations between the main tweet and its replies by utilizing both semantic similarity and semantic oppositeness. Both components employ the self-attention mechanism in neural text modeling, with semantic oppositeness utilizing word-level self-attention and semantic similarity utilizing post-level self-attention. Extensive experiments on recent data sets for this problem show that our proposed model achieves state-of-the-art performance and is more robust to the performance variance introduced by randomness.
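To illustrate the idea the abstract describes, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a discord signal could be combined with attention: a toy cosine-based "oppositeness" score between a main post and a reply, word-level self-attention to pool a reply into a vector, and post-level self-attention over the main post and the pooled reply. All names, dimensions, and the scoring function here are illustrative assumptions.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over the rows of X (n, d)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X

def oppositeness(u, v):
    """Toy oppositeness score: near 1 for anti-correlated vectors,
    near 0 for aligned ones (maps cosine [-1, 1] onto [1, 0])."""
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return (1.0 - cos) / 2.0

# Word level: attend over the words of a reply, then mean-pool.
reply_words = np.random.randn(5, 8)   # 5 words, 8-dim embeddings (toy sizes)
reply_vec = self_attention(reply_words).mean(axis=0)

# Post level: attend over [main post; pooled reply].
main_post = np.random.randn(8)
posts = np.stack([main_post, reply_vec])
context = self_attention(posts)

# Oppositeness between main post and reply serves as a discord feature
# appended to the attention-derived context representation.
opp = oppositeness(main_post, reply_vec)
features = np.concatenate([context.ravel(), [opp]])
```

In this sketch the final `features` vector would feed a downstream classifier; the actual model in the paper learns these representations end-to-end rather than using a fixed cosine score.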

Cite

APA

de Silva, N., & Dou, D. (2021). Semantic oppositeness assisted deep contextual modeling for automatic rumor detection in social networks. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 405–415). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.31
