Don’t throw those morphological analyzers away just yet: Neural morphological disambiguation for Arabic


Abstract

This paper presents a model for Arabic morphological disambiguation based on Recurrent Neural Networks (RNN). We train Long Short-Term Memory (LSTM) cells in several configurations and embedding levels to model the various morphological features. Our experiments show that these models outperform state-of-the-art systems without explicit use of feature engineering. However, adding features learned from a morphological analyzer, which models the space of possible analyses, provides additional improvement. We use the resulting morphological models to score and rank the analyses produced by the morphological analyzer for morphological disambiguation. The results show significant gains in accuracy across several evaluation metrics. Our system achieves a 4.4% absolute increase over the state-of-the-art in full morphological analysis accuracy (30.6% relative error reduction), and a 10.6% absolute increase (31.5% relative error reduction) for out-of-vocabulary words.
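The ranking step described above can be illustrated with a minimal sketch: neural taggers predict a probability for each value of each morphological feature, and every candidate analysis returned by the morphological analyzer is scored by how well its feature values agree with those predictions. All names below (rank_analyses, the feature labels, the toy probabilities) are illustrative assumptions, not the authors' actual API or data.

```python
# Hedged sketch of disambiguation by ranking analyzer output with neural
# per-feature predictions. Not the paper's implementation; a toy version
# of the idea described in the abstract.

from typing import Dict, List


def rank_analyses(
    analyses: List[Dict[str, str]],            # candidate analyses for one word,
                                               # each a feature -> value mapping
    feature_probs: Dict[str, Dict[str, float]] # per-feature value probabilities
                                               # from the neural taggers
) -> List[Dict[str, str]]:
    """Sort analyses from most to least compatible with the neural predictions."""
    def score(analysis: Dict[str, str]) -> float:
        total = 0.0
        for feature, value in analysis.items():
            # Probability the tagger assigns to this analysis's value;
            # unseen values get a small floor so no analysis is ruled out.
            total += feature_probs.get(feature, {}).get(value, 1e-6)
        return total

    return sorted(analyses, key=score, reverse=True)


if __name__ == "__main__":
    # Two hypothetical analyses of one Arabic word.
    candidates = [
        {"pos": "noun", "gen": "m", "num": "s"},
        {"pos": "verb", "gen": "m", "num": "s"},
    ]
    # Hypothetical tagger output (probability distributions per feature).
    tagger_output = {
        "pos": {"noun": 0.8, "verb": 0.2},
        "gen": {"m": 0.9, "f": 0.1},
        "num": {"s": 0.7, "p": 0.3},
    }
    best = rank_analyses(candidates, tagger_output)[0]
    print(best)  # the noun analysis, which agrees best with the taggers
```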

Cite (APA)

Zalmout, N., & Habash, N. (2017). Don’t throw those morphological analyzers away just yet: Neural morphological disambiguation for Arabic. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 704–713). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1073
