Team "noConflict" at CASE 2021 Task 1: Pretraining for Sentence-Level Protest Event Detection


Abstract

An ever-increasing amount of text, in the form of social media posts and news articles, gives rise to new challenges and opportunities for the automatic extraction of socio-political events. In this paper, we present our submission to the Shared Tasks on Socio-Political and Crisis Events Detection, Task 1, Multilingual Protest News Detection, Subtask 2, Event Sentence Classification, of CASE @ ACL-IJCNLP 2021. In our submission, we utilize the RoBERTa model with additional pretraining, and achieve the best F1 score of 0.8532 in event sentence classification in English and the second-best F1 score of 0.8700 in Portuguese via simple translation. We analyze the failure cases of our model. We also conduct an ablation study to show the effect of choosing the right pretrained language model, adding additional training data, and applying data augmentation.

Citation (APA)

Hu, T., & Stoehr, N. (2021). Team “noConflict” at CASE 2021 Task 1: Pretraining for Sentence-Level Protest Event Detection. In 4th Workshop on Challenges and Applications of Automated Extraction of Socio-Political Events from Text, CASE 2021 - Proceedings (pp. 152–160). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.case-1.20
