An Interactive Exploratory Tool for the Task of Hate Speech Detection

Abstract

With the growth of Automatic Content Moderation (ACM) on widely used social media platforms, transparency into the design of moderation technology and policy is necessary for online communities to advocate for themselves when harms occur. In this work, we describe a suite of interactive modules to support the exploration of various aspects of this technology, particularly those components that rely on English models and datasets for hate speech detection, a subtask within ACM. We intend for this demo to support the various stakeholders of ACM in investigating the definitions and decisions that underpin current technologies, so that those with technical knowledge and those with contextual knowledge may both better understand existing systems.

Citation

McMillan-Major, A., Paullada, A., & Jernite, Y. (2022). An Interactive Exploratory Tool for the Task of Hate Speech Detection. In HCI+NLP 2022 - 2nd Workshop on Bridging Human-Computer Interaction and Natural Language Processing, Proceedings of the Workshop (pp. 11–20). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.hcinlp-1.2
