MULTISEM at SemEval-2020 Task 3: Fine-tuning BERT for Lexical Meaning


Abstract

We present the MULTISEM systems submitted to SemEval 2020 Task 3: Graded Word Similarity in Context (GWSC). We experiment with injecting semantic knowledge into pre-trained BERT models through fine-tuning on lexical semantic tasks related to GWSC. We use existing semantically annotated datasets and propose to approximate similarity through automatically generated lexical substitutes in context. We participate in both GWSC subtasks and address two languages, English and Finnish. Our best English models occupy the third and fourth positions in the ranking for the two subtasks. Performance is lower for the Finnish models, which are mid-ranked in the respective subtasks, highlighting the important role of data availability for fine-tuning.
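The abstract does not include implementation details. As a rough illustration of the underlying task (scoring how a target word's meaning shifts between two contexts with a pre-trained BERT model), a minimal sketch using the Hugging Face transformers library is shown below. The model name, the subword-averaging pooling, and the cosine comparison are assumptions for illustration only, not the authors' fine-tuning setup or substitute-based approach.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical choice of checkpoint; the paper fine-tunes BERT on related lexical tasks.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Average the contextual embeddings of the subword pieces of `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    tokens = enc["input_ids"][0].tolist()
    # Locate the first occurrence of the word's subword pieces in the sentence.
    for i in range(len(tokens) - len(word_ids) + 1):
        if tokens[i : i + len(word_ids)] == word_ids:
            return hidden[i : i + len(word_ids)].mean(dim=0)
    raise ValueError(f"'{word}' not found in: {sentence}")

ctx1 = "She sat on the bank of the river."
ctx2 = "He deposited the money at the bank."
sim = torch.cosine_similarity(word_embedding(ctx1, "bank"),
                              word_embedding(ctx2, "bank"), dim=0)
print(f"Similarity of 'bank' across the two contexts: {sim.item():.3f}")
```

A lower cosine score across the two contexts indicates a larger perceived change in meaning, which is the kind of graded judgment GWSC asks systems to predict.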

Citation (APA)

Garí Soler, A., & Apidianaki, M. (2020). MULTISEM at SemEval-2020 Task 3: Fine-tuning BERT for Lexical Meaning. In Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval-2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 158–165). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.18
