Approaches for Fake Content Detection: Strengths and Weaknesses to Adversarial Attacks


Abstract

In the last few years, we have witnessed an explosive growth of fake content on the Internet, which has significantly affected the veracity of information on many social platforms. Much of this disruption has been caused by the proliferation of advanced machine and deep learning methods. In turn, social platforms have been using the same technological methods to detect fake content. However, there is limited understanding of the strengths and weaknesses of these detection methods. In this article, we describe examples of machine and deep learning approaches that can be used to detect different types of fake content. We also discuss the characteristics and the potential for adversarial attacks on these methods that could reduce the accuracy of fake content detection. Finally, we identify and discuss some future research challenges in this area.

Citation (APA)

Carter, M., Tsikerdekis, M., & Zeadally, S. (2021). Approaches for Fake Content Detection: Strengths and Weaknesses to Adversarial Attacks. IEEE Internet Computing, 25(2), 73–83. https://doi.org/10.1109/MIC.2020.3032323
