Stacking of BERT and CNN Models for Arabic Word Sense Disambiguation

Abstract

We propose a new approach to Arabic Word Sense Disambiguation (AWSD) that hybridizes a single-layer Convolutional Neural Network (CNN) with contextual representations from BERT. WSD is the task of automatically detecting the correct meaning of a word used in a given context; it can be framed as a classification task, where the context is generally a short sentence. Kim [26] showed that combining a CNN with an RNN (recurrent neural network) yields good results for text classification. Here, we use a concatenation of BERT models as word embeddings to obtain the target and context representations simultaneously. Our approach improves WSD performance for the Arabic language. The experimental results show that our model outperforms state-of-the-art approaches, achieving an accuracy of 96.42% on the Arabic WordNet dataset.
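The stacking described in the abstract — BERT contextual embeddings fed into a single-layer CNN that classifies the target word's sense — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, kernel width, and number of senses are assumptions, and a random tensor stands in for the BERT last hidden state to keep the example self-contained.

```python
import torch
import torch.nn as nn

class BertCnnWSD(nn.Module):
    """Hypothetical sketch of the stacked architecture: BERT embeddings
    -> single-layer CNN -> global max pooling -> sense classifier."""

    def __init__(self, emb_dim=768, n_filters=128, kernel_size=3, n_senses=5):
        super().__init__()
        # Single convolutional layer over the token dimension.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
        self.classifier = nn.Linear(n_filters, n_senses)

    def forward(self, bert_embeddings):
        # bert_embeddings: (batch, seq_len, emb_dim), e.g. BERT's last hidden state.
        x = bert_embeddings.transpose(1, 2)   # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))          # (batch, n_filters, seq_len)
        x = torch.max(x, dim=2).values        # global max pooling over tokens
        return self.classifier(x)             # sense logits

# Stand-in for BERT output: 2 sentences, 16 tokens, 768-dim embeddings.
embeddings = torch.randn(2, 16, 768)
logits = BertCnnWSD()(embeddings)
print(logits.shape)  # torch.Size([2, 5])
```

In practice the embeddings would come from a pretrained Arabic BERT model, and the pooled CNN features would be trained against gold sense labels with cross-entropy loss.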


CITATION STYLE

APA

Saidi, R., & Jarray, F. (2023). Stacking of BERT and CNN Models for Arabic Word Sense Disambiguation. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(11). https://doi.org/10.1145/3623379
