UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition Extraction


Abstract

This work presents our contribution to the 6th task of SemEval-2020: Extracting Definitions from Free Text in Textbooks (DeftEval). The competition consists of three subtasks with different levels of granularity: (1) classification of sentences as definitional or non-definitional, (2) labeling of definitional sentences, and (3) relation classification. We use various pretrained language models (i.e., BERT, XLNet, RoBERTa, SciBERT, and ALBERT) to solve each of the three subtasks of the competition. Specifically, for each language model variant, we experiment with both freezing its weights and fine-tuning them. We also explore a multi-task architecture trained to jointly predict the outputs for the second and the third subtasks. Our best-performing model, evaluated on the DeftEval dataset, ranked 32nd on the first subtask and 37th on the second subtask. The code is available for further research at: https://github.com/avramandrei/DeftEval.
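The abstract contrasts freezing a pretrained language model's weights with fully fine-tuning it. Below is a minimal sketch (not the authors' code) of that setup for the first subtask, sentence-level definitional classification, using the Hugging Face Transformers library; the model name, learning rate, and toy example are illustrative assumptions.

```python
# Sketch of freezing vs. fine-tuning a pretrained encoder for subtask 1
# (definitional vs. non-definitional sentence classification).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # assumption; could be roberta-base, xlnet-base-cased, etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

freeze_encoder = True  # True: train only the classification head; False: fine-tune all weights
if freeze_encoder:
    for param in model.base_model.parameters():
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)

# One training step on a toy example (label 1 = definitional, 0 = non-definitional).
batch = tokenizer(
    ["A definition is a statement of the meaning of a term."],
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([1])
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```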

Cite (APA)

Avram, A. M., Cercel, D. C., & Chiru, C. G. (2020). UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition Extraction. In Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval 2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 737–745). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.97
