Adversarial attacks on content-based filtering journal recommender systems

10 citations · 14 Mendeley readers
Abstract

Recommender systems help people find what they actually need. Academic papers are researchers' key output, and authors face a wide choice of venues when submitting their work. To make selecting a suitable journal more efficient, journal recommender systems (JRS) can automatically suggest a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss adversarial attacks against content-based filtering JRS. We propose both a targeted attack, which makes selected target journals appear more often in the recommendations, and a non-targeted attack, which causes the system to produce incorrect recommendations. We also conduct extensive experiments to validate the proposed methods. We hope this paper helps improve JRS by raising awareness that such adversarial attacks exist.
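The paper's concrete attack methods are not reproduced in this abstract, but the core idea of a targeted attack on a content-based recommender can be illustrated with a toy sketch. Everything below is hypothetical: the journal "profiles", the bag-of-words cosine recommender, and the naive append-the-target-terms heuristic are illustrative assumptions, not the authors' technique.

```python
# Toy sketch only: a targeted attack on a hypothetical bag-of-words,
# cosine-similarity journal recommender. Not the paper's actual method.
from collections import Counter
from math import sqrt

# Hypothetical journal profiles built from a few representative terms.
JOURNALS = {
    "J-Security": "adversarial attack malware intrusion security",
    "J-Medicine": "clinical patient trial diagnosis treatment",
    "J-Robotics": "robot control motion planning actuator",
}

def vec(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(abstract):
    """Journals ranked by cosine similarity between abstract and profile."""
    v = vec(abstract)
    return sorted(JOURNALS, key=lambda j: cosine(v, vec(JOURNALS[j])), reverse=True)

def targeted_attack(abstract, target, repeats=2):
    """Naive targeted attack: append the target journal's profile terms
    to the abstract so the target rises to the top of the ranking.
    Assumes the attacker can read the target's profile (white-box)."""
    return abstract + (" " + JOURNALS[target]) * repeats

clean = "clinical trial of a new treatment for patient diagnosis"
poisoned = targeted_attack(clean, "J-Security")
print(recommend(clean)[0])     # top journal for the clean abstract
print(recommend(poisoned)[0])  # top journal after the targeted attack
```

In this toy setting the clean abstract is recommended to the medical journal, while the poisoned abstract flips the top recommendation to the attacker's target; a real attack would need to keep the injected text inconspicuous, which is where the paper's methods go beyond this sketch.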

Citation (APA)

Gu, Z., Cai, Y., Wang, S., Li, M., Qiu, J., Su, S., … Tian, Z. (2020). Adversarial attacks on content-based filtering journal recommender systems. Computers, Materials and Continua, 64(3), 1755–1770. https://doi.org/10.32604/cmc.2020.010739
