Attention-free encoder decoder for morphological processing


Abstract

We present RACAI’s entry for the CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection. The system is based on an attention-free encoder-decoder neural architecture that uses a bidirectional LSTM to encode the input sequence and a unidirectional LSTM to decode and produce the output. Instead of applying a sequence-to-sequence model directly at the character level, we use a dynamic algorithm to align the input and output sequences. Based on these alignments, we produce a series of special symbols similar to those of a finite-state transducer (FST).
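
The alignment step can be pictured as a standard edit-distance computation. The sketch below is a minimal illustration, not the authors' implementation: it assumes a Levenshtein-style dynamic-programming alignment, and the symbol inventory (COPY, SUB_x, INS_x, DEL) is a hypothetical stand-in for the FST-like symbols the abstract describes.

```python
def align_to_symbols(src, tgt):
    """Align two character sequences with Levenshtein-style dynamic
    programming, then trace back and emit one FST-like edit symbol
    per alignment step (COPY / SUB_x / INS_x / DEL)."""
    n, m = len(src), len(tgt)
    # dp[i][j] = minimal edit cost between src[:i] and tgt[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (src[i - 1] != tgt[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # Trace back through the table, emitting one symbol per edit.
    symbols, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and dp[i][j] == dp[i - 1][j - 1] + (src[i - 1] != tgt[j - 1])):
            symbols.append("COPY" if src[i - 1] == tgt[j - 1]
                           else f"SUB_{tgt[j - 1]}")
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            symbols.append("DEL")
            i -= 1
        else:
            symbols.append(f"INS_{tgt[j - 1]}")
            j -= 1
    return symbols[::-1]
```

For the reinflection pair walk → walked, this sketch yields COPY COPY COPY COPY INS_e INS_d, so the decoder mostly predicts COPY symbols instead of re-generating every output character.

The attention-free encoder-decoder itself can be sketched as follows, again under assumptions: the PyTorch framing, layer sizes, and the use of the encoder's final states to seed the decoder are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AttentionFreeSeq2Seq(nn.Module):
    """Bidirectional LSTM encoder, unidirectional LSTM decoder,
    no attention: the decoder sees only the encoder's final state."""
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True,
                               bidirectional=True)
        # Decoder hidden size matches the concatenated
        # forward/backward encoder states.
        self.decoder = nn.LSTM(emb, 2 * hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        _, (h, c) = self.encoder(self.src_emb(src))
        # Concatenate the two directions' final states to seed
        # the unidirectional decoder.
        h = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
        c = torch.cat([c[0], c[1]], dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), (h, c))
        return self.out(dec_out)
```

In use, model(src_batch, tgt_in_batch) returns per-step logits over the target symbol vocabulary (here, the edit symbols rather than raw characters), trainable with ordinary cross-entropy.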

Cite (APA)

Dumitrescu, S. D., & Boros, T. (2018). Attention-free encoder decoder for morphological processing. In CoNLL 2018 - Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection (pp. 64–68). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k18-3007
