Using deep learning models for learning semantic text similarity of Arabic questions

14 citations · 39 Mendeley readers

Abstract

Question-answering platforms serve millions of users seeking knowledge and solutions to their daily problems. However, many knowledge seekers struggle to find the right answer among similar, already-answered questions, while writers responding to questions feel they must repeat the same answers for similar questions. This research tackles the problem of learning semantic text similarity between asked questions using deep learning. Three models are implemented to address this problem: i) a supervised machine-learning model using XGBoost trained with pre-defined features, ii) an adapted Siamese-based recurrent deep learning architecture trained with pre-defined features, and iii) a pre-trained deep bidirectional transformer based on the BERT model. The proposed models were evaluated on a reference Arabic dataset from the mawdoo3.com company. Evaluation results show that the BERT-based model outperforms the other two with F1 = 92.99%, the Siamese-based model comes second with F1 = 89.048%, and the XGBoost baseline achieves the lowest result, F1 = 86.086%.
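The abstract does not list the pre-defined features used by the XGBoost baseline, but a minimal sketch of the kind of hand-crafted question-pair features such a model could be trained on (the feature names and choices here are illustrative assumptions, not the paper's actual feature set) might look like this:

```python
def similarity_features(q1: str, q2: str) -> dict:
    """Toy hand-crafted features for a question pair.

    Illustrative assumption: simple lexical-overlap features of the
    kind often fed to a gradient-boosting classifier such as XGBoost
    for text-similarity tasks; not the features used in the paper.
    """
    t1, t2 = set(q1.lower().split()), set(q2.lower().split())
    union = t1 | t2
    return {
        # Jaccard overlap of token sets (0.0 when both are empty)
        "jaccard": len(t1 & t2) / len(union) if union else 0.0,
        # Absolute difference in character length
        "len_diff": abs(len(q1) - len(q2)),
        # Number of shared tokens
        "common_tokens": len(t1 & t2),
    }
```

Such feature dictionaries would then be vectorized and passed to the classifier, whereas the Siamese and BERT-based models in the paper learn their representations directly from the question text.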

Citation (APA)

Hammad, M., Al-Smadi, M., Baker, Q. B., & Al-Zboon, S. A. (2021). Using deep learning models for learning semantic text similarity of Arabic questions. International Journal of Electrical and Computer Engineering, 11(4), 3519–3528. https://doi.org/10.11591/ijece.v11i4.pp3519-3528
