Arabic Textual Entailment with Word Embeddings

Citations: 32 · Mendeley readers: 84

Abstract

Determining the textual entailment between texts is important in many NLP tasks, such as summarization, question answering, and information extraction and retrieval. Various methods have been suggested based on external knowledge sources; however, such resources are not available in all languages, and their acquisition is typically laborious and costly. Distributional word representations, such as word embeddings learned over large corpora, have been shown to capture syntactic and semantic word relationships, and such models have improved performance on several NLP tasks. In this paper, we address the problem of textual entailment in Arabic. We employ both traditional features and distributional representations; crucially, we do not depend on any external resources in the process. Our suggested approach yields state-of-the-art performance on a standard data set, ArbTE, achieving an accuracy of 76.2% compared to the current state of the art of 69.3%.
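The abstract describes combining distributional word representations with traditional features for entailment classification. As a minimal illustration of the embedding side of such an approach, the sketch below averages word vectors into sentence vectors and computes a cosine-similarity feature between a premise and a hypothesis. The vocabulary, vector values, and function names are illustrative assumptions, not the authors' actual features or embeddings (which would be learned from a large Arabic corpus).

```python
import math

# Toy word vectors standing in for embeddings learned from a large corpus.
# The words and values here are illustrative assumptions only.
EMBEDDINGS = {
    "cat":    [1.0, 0.0, 0.0],
    "feline": [0.9, 0.1, 0.0],  # close to "cat" in embedding space
    "car":    [0.0, 1.0, 0.0],  # unrelated word
}

def sentence_vector(tokens):
    """Average the vectors of known tokens (a simple bag-of-embeddings)."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not vecs:
        return [0.0] * 3
    return [sum(dims) / len(vecs) for dims in zip(*vecs)]

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 for a zero vector."""
    dot = sum(a * b for a, b in zip(u, v))
    denom = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / denom if denom else 0.0

def entailment_features(premise, hypothesis):
    """One similarity feature comparing premise and hypothesis vectors."""
    p = sentence_vector(premise)
    h = sentence_vector(hypothesis)
    return {"cosine": cosine(p, h)}
```

In a full system, features like this would be concatenated with traditional surface features (e.g. word overlap) and fed to a supervised classifier.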

Citation (APA)

Almarwani, N., & Diab, M. (2017). Arabic Textual Entailment with Word Embeddings. In Proceedings of the Third Arabic Natural Language Processing Workshop (WANLP 2017), co-located with EACL 2017 (pp. 185–190). Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-1322
