Unmasking Fake Social Network Accounts with Explainable Intelligence

Citations: 1
Readers: 8 (Mendeley users who have this article in their library)

Abstract

Global social network platforms have woven a web connecting people worldwide, enabling unprecedented social interaction and information exchange. This digital connectivity, however, has also fueled the growth of fake social media accounts used for mass spamming and targeted attacks on specific accounts or sites. In response, carefully constructed artificial intelligence (AI) models have been deployed across numerous digital domains as a defense against these fraudulent accounts. Integrating such models into security and commerce, however, requires that their decisions be clearly articulated and validated. This study addresses that need by applying SHAP, an Explainable AI (XAI) technique, to interpret the predictions of an XGBoost model trained on two datasets collected from Instagram and Twitter. The SHAP explanations are inspected, assessed, and benchmarked against traditional feature selection techniques, and the analysis culminates in a discussion of SHAP's suitability as a reliable XAI method for this task.
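SHAP attributes a model's prediction to its individual input features by computing Shapley values. As a minimal illustration of the quantity SHAP approximates, the sketch below computes exact Shapley values for a toy, hand-weighted "fake-account" scorer standing in for the trained XGBoost model; the feature names, weights, and baseline here are hypothetical assumptions for illustration, not taken from the paper.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear scorer standing in for a trained XGBoost model.
# Features (assumed, for illustration): follower ratio, posts/day, has profile pic.
def score(x):
    return 0.6 * x[0] + 0.3 * x[1] + 0.1 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's marginal contribution to f,
    averaged over all coalitions of the other features. Features absent
    from a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z = list(baseline)
                for j in subset:
                    z[j] = x[j]
                without_i = f(tuple(z))   # coalition S only
                z[i] = x[i]
                with_i = f(tuple(z))      # coalition S plus feature i
                phi[i] += w * (with_i - without_i)
    return phi

# For a linear model each phi_i reduces to w_i * (x_i - baseline_i),
# and the values sum to f(x) - f(baseline) (SHAP's additivity property).
phi = shapley_values(score, (1.0, 2.0, 1.0), (0.0, 0.0, 0.0))
```

In the paper's setting, the `shap` library computes these attributions efficiently for tree ensembles rather than by this exponential enumeration, but the attributions it produces satisfy the same additivity property checked above.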

Citation (APA)

Alnagi, E., Ahmad, A., Al-Haija, Q. A., & Aref, A. (2024). Unmasking Fake Social Network Accounts with Explainable Intelligence. International Journal of Advanced Computer Science and Applications, 15(3), 1277–1283. https://doi.org/10.14569/IJACSA.2024.01503125
