Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers

Abstract

Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy. It is therefore essential to have methods that can compare models and identify over-reliance on specific concepts. We consider three well-known abusive language classifiers trained on large English datasets and focus on the concept of negative emotions, which is an important signal but should not be learned as a sufficient feature for the label of abuse. Motivated by the definition of global sufficiency, we first examine the unwanted dependencies learned by the classifiers by assessing their accuracy on a challenge set across all decision thresholds. Then, recognizing that a challenge set might not always be available, we introduce concept-based explanation metrics to assess the influence of the concept on the labels. These metrics allow us to compare classifiers with respect to the degree of false global sufficiency they have learned between the concept and the label.
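To make the two tests described in the abstract concrete, here is a minimal, self-contained sketch. It is not the authors' implementation: the function names, the simulated scores, and the toy scorer below are all hypothetical. The first function sweeps every decision threshold and measures accuracy on a challenge set; a classifier that has learned negative emotion as globally sufficient for abuse cannot separate abusive texts from non-abusive, negative-emotion texts at any threshold. The second is a finite-difference stand-in for a TCAV-style concept-sensitivity score: the fraction of examples whose abuse score rises when their embedding is nudged along a concept direction.

```python
# Minimal sketches of the two tests described in the abstract.
# Not the authors' code: accuracy_across_thresholds, concept_sensitivity,
# score_fn, and the simulated data below are hypothetical.

import numpy as np

def accuracy_across_thresholds(scores, labels, num_thresholds=101):
    """Accuracy of thresholded predictions at evenly spaced thresholds in [0, 1].

    scores: model probabilities for the 'abusive' class, shape (n,)
    labels: gold labels, 1 = abusive, 0 = non-abusive, shape (n,)
    """
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    preds = scores[None, :] >= thresholds[:, None]   # (num_thresholds, n)
    return (preds == labels[None, :]).mean(axis=1)   # accuracy per threshold

def concept_sensitivity(score_fn, embeddings, concept_vector, eps=1e-2):
    """Fraction of examples whose abuse score increases when the embedding is
    nudged along the concept direction (a finite-difference stand-in for a
    TCAV-style directional-derivative sign test)."""
    v = concept_vector / np.linalg.norm(concept_vector)
    return float((score_fn(embeddings + eps * v) > score_fn(embeddings)).mean())

rng = np.random.default_rng(0)

# Test 1: simulated scores on a balanced challenge set. An over-reliant model
# gives similarly high scores to abusive texts and to non-abusive texts that
# merely express negative emotion, so no threshold separates them well.
pos = rng.uniform(0.5, 1.0, size=100)   # abusive examples (label 1)
neg = rng.uniform(0.5, 1.0, size=100)   # non-abusive, negative emotion (label 0)
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(100, dtype=int), np.zeros(100, dtype=int)])
acc = accuracy_across_thresholds(scores, labels)
print(f"best accuracy over all thresholds: {acc.max():.2f}")  # stays near 0.5

# Test 2: concept sensitivity of a toy linear scorer whose weight vector
# coincides with the concept direction -- the pathological case where the
# concept alone drives the label.
d = 16
w = rng.normal(size=d)                  # hypothetical concept direction

def score_fn(X):
    return 1.0 / (1.0 + np.exp(-X @ w))

X = rng.normal(size=(200, d))
print(f"concept sensitivity: {concept_sensitivity(score_fn, X, w):.2f}")  # ~1.0
```

In this pathological setup the sensitivity is near 1.0, the signature of a classifier that treats the concept as sufficient for the label; comparing this number across classifiers is the kind of model comparison the abstract proposes.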

Citation (APA)

Nejadgholi, I., Kiritchenko, S., Fraser, K. C., & Balkır, E. (2023). Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH) (pp. 138–149). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.woah-1.14
