Abstract
Recent studies have shown that the sequence-to-sequence (seq2seq) model is a promising approach to morphological reinflection. At the CoNLL-SIGMORPHON 2017 Shared Task on Universal Morphological Reinflection, we largely followed this approach with minor variations. The results were striking in a certain sense. In the high-resource setting our system achieved 91.46% accuracy (only 3.85% behind the best system), and in the medium-resource setting it reached 65.06% (almost the same as the baseline). In the low-resource setting, however, accuracy was only 1.58%, the worst among the submitted systems. In this paper, we describe the system and present an error analysis of these results.
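The shared task frames reinflection as mapping a lemma plus a set of morphological tags to an inflected word form. For readers unfamiliar with the setup, the sketch below shows a generic character-level encoder-decoder of the kind the abstract refers to. It is an illustrative assumption written in PyTorch, not the authors' submitted system; the vocabulary, hyperparameters, and toy example are placeholders.

```python
# A minimal, generic character-level seq2seq sketch for reinflection
# (morphological tags + lemma characters -> inflected form). Illustrative
# only; not the authors' system. All sizes and the vocabulary are assumptions.
import torch
import torch.nn as nn

class Seq2SeqReinflector(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        # src: (batch, src_len) ids of tags + lemma characters
        # tgt: (batch, tgt_len) ids of target characters (teacher forcing)
        _, h = self.encoder(self.embed(src))       # final encoder hidden state
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)                   # (batch, tgt_len, vocab)

# Toy usage: inflect "run" with the tag V;PST toward "ran", character by character.
vocab = {c: i for i, c in enumerate("<pad> <bos> <eos> V;PST r u n a".split())}
model = Seq2SeqReinflector(len(vocab))
src = torch.tensor([[vocab["V;PST"], vocab["r"], vocab["u"], vocab["n"]]])
tgt = torch.tensor([[vocab["<bos>"], vocab["r"], vocab["a"], vocab["n"]]])
logits = model(src, tgt)  # scores over the next target character at each step
print(logits.shape)       # torch.Size([1, 4, 8])
```

Such models are typically trained with cross-entropy over the target characters, which makes them data-hungry; the abstract's contrast between the high- and low-resource results illustrates that sensitivity.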
CITATION STYLE
Senuma, H., & Aizawa, A. (2017). Seq2seq for morphological reinflection: When deep learning fails. In CoNLL 2017 - Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection (pp. 100–109). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k17-2011