Analyzing the Use of Large Language Models for Content Moderation with ChatGPT Examples


Abstract

Content moderation systems are crucial in Online Social Networks (OSNs): their role is to keep platforms and their users safe from malicious activity. However, there is an emerging consensus that such systems are unfair to fragile users and minorities. Furthermore, content moderation systems are difficult to personalize and lack effective communication between users and platforms. In this context, we propose an enhancement of the current content moderation framework that integrates Large Language Models (LLMs) into the enforcement pipeline.
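
The abstract stops at the proposal, but the core idea lends itself to a concrete illustration: an LLM placed in the enforcement pipeline that both classifies a post against a platform policy and drafts an explanation that can be sent back to the author. The sketch below is not the authors' pipeline; the OpenAI Python client, the model name, the example policy text, and the prompt wording are all illustrative assumptions.

```python
# Minimal sketch of an LLM-assisted moderation check (illustrative only).
# Assumptions not taken from the paper: the OpenAI Python client (>= 1.0),
# the model name, the policy text, and the prompt wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = "No harassment, hate speech, or calls to violence."

def moderate(post_text: str) -> str:
    """Ask the LLM whether a post violates the policy and to explain the
    decision in one sentence addressed to the post's author."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
        temperature=0,          # deterministic output for moderation decisions
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a content moderator. Platform policy: {POLICY} "
                    "Reply with ALLOW or REMOVE, followed by a one-sentence "
                    "explanation addressed to the post's author."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(moderate("Everyone from that group should be kicked off this site."))
```

One design point worth noting: having the model produce the user-facing explanation alongside the verdict is what speaks to the paper's complaint about poor communication between users and platforms, since the rationale can be surfaced directly in the moderation notice.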

Citation (APA)

Franco, M., Gaggi, O., & Palazzi, C. E. (2023). Analyzing the Use of Large Language Models for Content Moderation with ChatGPT Examples. In Proceedings of the 2023 Workshop on Open Challenges in Online Social Networks (OASIS 2023), held in conjunction with the 34th ACM Conference on Hypertext and Social Media (HT 2023) (pp. 1–8). Association for Computing Machinery. https://doi.org/10.1145/3599696.3612895
