Semantic Embeddings for Arabic Retrieval Augmented Generation (ARAG)

15 citations · 44 Mendeley readers

Abstract

In recent times, Retrieval Augmented Generation (RAG) models have garnered considerable attention, primarily due to the impressive capabilities exhibited by Large Language Models (LLMs). Nevertheless, the Arabic language, despite its significance and widespread use, has received relatively little research attention in this field. A critical element of RAG systems is the Information Retrieval component, at whose core lies the vector embedding process commonly referred to as “semantic embedding”. This study examines an array of multilingual semantic embedding models with the aim of enhancing a model’s ability to comprehend and generate Arabic text effectively. We conducted an extensive evaluation of ten state-of-the-art multilingual semantic embedding models, using the publicly available ARCD dataset as a benchmark and assessing performance with the average Recall@k metric. The results showed that the Microsoft E5 sentence embedding model outperformed all other models on the ARCD dataset, with Recall@10 exceeding 90%.
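The retrieval step and the Recall@k metric described in the abstract can be sketched as follows: passages and queries are embedded as vectors, passages are ranked by cosine similarity to each query, and Recall@k is the fraction of queries whose gold passage appears in the top-k results. This is a minimal illustration with toy vectors, not the authors' code; the function names and data are hypothetical.

```python
import numpy as np

def top_k_retrieve(query_vec, passage_vecs, k):
    """Return indices of the k passages most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity per passage
    return np.argsort(-sims)[:k]     # best-first indices

def average_recall_at_k(query_vecs, passage_vecs, gold_idx, k):
    """Average Recall@k: fraction of queries whose gold passage
    (one relevant passage per query, as in ARCD) is retrieved in the top k."""
    hits = sum(
        1 for qv, g in zip(query_vecs, gold_idx)
        if g in top_k_retrieve(qv, passage_vecs, k)
    )
    return hits / len(gold_idx)

# Toy corpus: 4 orthogonal passage embeddings; queries lie near their gold passage.
passages = np.eye(4)
queries = passages[[0, 2]] + 0.01
print(average_recall_at_k(queries, passages, [0, 2], k=1))  # → 1.0
```

In a real ARAG pipeline the toy vectors would be replaced by embeddings from a multilingual model (e.g. one of the ten evaluated in the paper), but the ranking and metric computation are unchanged.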

Citation (APA)

Abdelazim, H., Tharwat, M., & Mohamed, A. (2023). Semantic Embeddings for Arabic Retrieval Augmented Generation (ARAG). International Journal of Advanced Computer Science and Applications, 14(11), 1328–1334. https://doi.org/10.14569/IJACSA.2023.01411135
