Team Stanford ACMLab at SemEval-2022 Task 4: Textual Analysis of PCL Using Contextual Word Embeddings

Abstract

We propose the use of a contextual-embedding-based neural model on strictly textual inputs to detect the presence of patronizing or condescending language (PCL). We fine-tuned a pre-trained BERT model to detect whether or not a paragraph contained PCL (Subtask 1), and fine-tuned a second pre-trained BERT model to identify the linguistic techniques used to convey the PCL (Subtask 2). Results show that this approach is viable for binary classification of PCL, but breaks down when attempting to identify the PCL techniques. Our system placed 32nd of 79 for Subtask 1 and 40th of 49 for Subtask 2.
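
For concreteness, below is a minimal sketch of how the Subtask 1 binary classifier described above might be fine-tuned with the Hugging Face transformers library. It is an illustrative reconstruction, not the authors' released code: the bert-base-uncased checkpoint, the learning rate, batch size, epoch count, and the load_pcl_paragraphs helper are all assumptions.

# Minimal sketch of Subtask 1 (binary PCL detection) via BERT fine-tuning.
# Illustrative only; hyperparameters and the data loader are assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def load_pcl_paragraphs():
    # Hypothetical stand-in for the task's paragraph-level training data:
    # returns parallel lists of paragraph strings and 0/1 PCL labels.
    return ["These poor families depend on our generosity to survive."], [1]

texts, labels = load_pcl_paragraphs()
enc = tokenizer(texts, truncation=True, padding=True, max_length=256, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(dataset, batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # epoch count is an assumption
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()  # cross-entropy loss over the two PCL classes
        optimizer.step()

Subtask 2 would follow the same pattern, with the classification head swapped for a multi-label output over the PCL technique categories.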

Citation (APA)

Dass-Vattam, U., Wallace, S., Sikand, R., Witzel, Z., & Tang, J. (2022). Team Stanford ACMLab at SemEval-2022 Task 4: Textual Analysis of PCL Using Contextual Word Embeddings. In SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 418–420). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.semeval-1.56
