Cardiff University at SemEval-2020 Task 6: Fine-tuning BERT for Domain-Specific Definition Classification

Abstract

We describe the system submitted to SemEval-2020 Task 6, Subtask 1. The aim of this subtask is to predict whether a given sentence contains a definition or not. Unsurprisingly, we found that strong results can be achieved by fine-tuning a pre-trained BERT language model. In this paper, we analyze the performance of this strategy. Among other findings, we show that results can be improved by using a two-step fine-tuning process, in which the BERT model is first fine-tuned on the full training set and then further specialized towards a target domain.
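
The two-step strategy can be illustrated with a minimal sketch using the Hugging Face Transformers library. This is an assumption about tooling rather than the authors' actual implementation, and the sentence lists and labels below are hypothetical placeholders standing in for the task's training data; the key point is that the same classification head is first fine-tuned on the full training set and then fine-tuned a second time on the target-domain subset only.

import torch
from torch.optim import AdamW
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Hypothetical toy data standing in for the shared-task training corpus.
all_train_sentences = [
    "A definition is a statement of the meaning of a term.",
    "The experiment ran for three weeks.",
]
all_train_labels = [1, 0]  # 1 = contains a definition, 0 = does not
domain_train_sentences = ["In biology, a cell is the basic structural unit of living organisms."]
domain_train_labels = [1]

def fine_tune(sentences, labels, epochs=3, lr=2e-5, batch_size=16):
    """One fine-tuning pass over a sentence-level binary classification set."""
    optimizer = AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for i in range(0, len(sentences), batch_size):
            enc = tokenizer(sentences[i:i + batch_size], padding=True, truncation=True,
                            max_length=128, return_tensors="pt").to(device)
            y = torch.tensor(labels[i:i + batch_size]).to(device)
            out = model(**enc, labels=y)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Step 1: fine-tune on the full (all-domain) training set.
fine_tune(all_train_sentences, all_train_labels)

# Step 2: continue fine-tuning on the target-domain subset only,
# specializing the model before classifying that domain's test sentences.
fine_tune(domain_train_sentences, domain_train_labels, epochs=2)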

Cite

APA

Jeawak, S. S., Espinosa-Anke, L., & Schockaert, S. (2020). Cardiff University at SemEval-2020 Task 6: Fine-tuning BERT for Domain-Specific Definition Classification. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval 2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 361–366). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.44
