OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection

Abstract

We propose a multilingual adversarial training model for determining whether a sentence contains an idiomatic expression. Given that a key challenge of this task is the limited size of the annotated data, our model relies on pre-trained contextual representations from state-of-the-art multilingual transformer-based language models (i.e., multilingual BERT and XLM-RoBERTa), and on adversarial training, a training method that further enhances model generalization and robustness. Without relying on any human-crafted features, knowledge bases, or additional datasets beyond the target datasets, our model achieved competitive results, ranking 6th in the SubTask A zero-shot setting and 15th in the SubTask A one-shot setting.
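The abstract describes adversarial training as perturbing the model's input representations to improve generalization. The paper itself does not give code, so the following is only a minimal sketch of the general idea on a toy logistic classifier: compute the gradient of the loss with respect to the input representation (a stand-in for a sentence's contextual embedding), add a small gradient-direction perturbation, and update the model parameters on the perturbed input. All function names and hyperparameters here are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(x, y, w):
    """Binary cross-entropy loss of a logistic classifier, with gradients
    w.r.t. both the input representation x and the weights w."""
    p = sigmoid(x @ w)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad_x = (p - y) * w   # dL/dx: direction in which the loss grows
    grad_w = (p - y) * x   # dL/dw: used for the parameter update
    return loss, grad_x, grad_w

def adversarial_step(x, y, w, lr=0.1, eps=0.05):
    """One training step with an adversarial input perturbation:
    move x by eps along the (normalized) loss gradient, i.e. in the
    direction that most increases the loss, then update w on the
    perturbed example."""
    _, grad_x, _ = loss_and_grads(x, y, w)
    delta = eps * grad_x / (np.linalg.norm(grad_x) + 1e-12)
    _, _, grad_w = loss_and_grads(x + delta, y, w)
    return w - lr * grad_w
```

In the paper's setting the perturbation would be applied to the transformer's embedding layer rather than to a raw feature vector, and the classifier is the full fine-tuned multilingual BERT or XLM-RoBERTa model; the toy version above only illustrates the perturb-then-update loop.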

Cite

Pereira, L. K., & Kobayashi, I. (2022). OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection. In SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 217–220). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.semeval-1.27
