ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD

Abstract

Using pre-trained transformer models such as BERT has proven effective in many NLP tasks. This paper presents our work on fine-tuning BERT models for Arabic Word Sense Disambiguation (WSD). We treated the WSD task as a sentence-pair binary classification task. First, we constructed a dataset of labeled Arabic context-gloss pairs (~167k pairs) extracted from the Arabic Ontology and the large lexicographic database available at Birzeit University; each pair was labeled as True or False, and the target word in each context was identified and annotated. Second, we used this dataset to fine-tune three pre-trained Arabic BERT models. Third, we experimented with different supervised signals for emphasizing the target word in its context. Our experiments achieved promising results (84% accuracy), even though a large set of senses was used.
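The setup described in the abstract maps onto a standard sentence-pair fine-tuning recipe: encode the context and a candidate gloss as one input, and train a binary head to predict whether the gloss matches the target word's sense. The sketch below illustrates this with the Hugging Face transformers library; the checkpoint name, the Arabic example pair, and the quotation-mark target marker are illustrative assumptions, not the paper's exact configuration.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative Arabic BERT checkpoint; the paper fine-tunes three such models.
MODEL_NAME = "aubmindlab/bert-base-arabertv02"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# One context-gloss pair. The target word is emphasized here with quotation
# marks, one possible supervision signal (the paper compares several markers).
context = 'ذهب الولد إلى "البنك" لسحب المال'          # "The boy went to the 'bank' to withdraw money"
gloss = "مؤسسة مالية تقبل الودائع وتقدم القروض"       # "a financial institution accepting deposits and offering loans"

# Encode as a sentence pair: [CLS] context [SEP] gloss [SEP]
inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)

# One gradient step on a True pair (label 1 = gloss matches the sense);
# a real run iterates over all ~167k labeled pairs for several epochs.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=torch.tensor([1]))
outputs.loss.backward()
optimizer.step()

# Inference: the argmax over the two logits gives the True/False decision.
# (Predictions are only meaningful after fine-tuning; the classification
# head starts out randomly initialized.)
model.eval()
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("True pair" if pred == 1 else "False pair")

At disambiguation time, the same model scores one such pair per candidate gloss of the target word, and the gloss with the highest True score is selected as the predicted sense.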

Cite (APA)

Al-Hajj, M., & Jarrar, M. (2021). ArabGlossBERT: Fine-tuning BERT on context-gloss pairs for WSD. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) (pp. 35–43). INCOMA Ltd. https://doi.org/10.26615/978-954-452-072-4_005
