Multilingual Sequence-to-Sequence Models for Hebrew NLP

Abstract

Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoder-only models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well for sub-word prediction tasks, such as Named Entity Recognition, when considering the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for large LMs in morphologically rich languages (MRLs) such as Hebrew. We demonstrate this by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, for which we can leverage powerful multilingual, pretrained sequence-to-sequence models such as mT5, eliminating the need for a separate, specialized, morpheme-based decoder. Using this approach, our experiments show substantial improvements over previously published results on all existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP for MRLs.
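
As a rough illustration of the text-to-text framing described in the abstract (not the authors' code or released checkpoints), the sketch below loads a pretrained mT5 checkpoint via the Hugging Face transformers library and feeds it a Hebrew sentence with a hypothetical task prefix. The base checkpoint is pretrained only; producing useful output for tasks such as segmentation or NER would require fine-tuning on that task, as the paper does.

```python
# Minimal sketch: framing a Hebrew NLP task as text-to-text with a
# multilingual seq2seq model (mT5). The task prefix "segment:" and the
# example sentence are illustrative assumptions, not the paper's prompts.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-base"  # pretrained multilingual seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical text-to-text framing: sentence in, segmented/labeled text out.
text = "segment: הילד הלך לבית הספר"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the decoder emits free text, morpheme-level predictions do not have to align with the input's sub-word tokenization, which is the property the paper exploits instead of a separate morpheme-based decoder.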

Citation (APA)
Eyal, M., Noga, H., Aharoni, R., Szpektor, I., & Tsarfaty, R. (2023). Multilingual Sequence-to-Sequence Models for Hebrew NLP. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 7700–7708). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.487
