Abstract
We address the task of distinguishing implicitly abusive sentences about identity groups (e.g. Muslims terrorize the world daily) from other group-related negative polar sentences (e.g. Muslims despise terrorism). Implicitly abusive language consists of utterances whose abuse is not conveyed by abusive words (e.g. bimbo or scum). So far, the detection of such utterances could not be properly addressed, since existing datasets with a high degree of implicit abuse are fairly biased. Following the recently proposed strategy of tackling implicit abuse by separately addressing its different subtypes, we present a new, focused and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For this task, we model components that each address one facet of such implicit abuse, i.e. depiction as perpetrators, aspectual classification and non-conformist views. The approach generalizes across different identity groups and languages.
Citation
Wiegand, M., Eder, E., & Ruppenhofer, J. (2022). Identifying Implicitly Abusive Remarks about Identity Groups using a Linguistically Informed Approach. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 5600–5612). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.410