Interpretability for morphological inflection: From character-level predictions to subword-level rules


Abstract

Neural models for morphological inflection have recently attained very high accuracy. However, their interpretation remains challenging. Towards this goal, we propose a simple, linguistically motivated variant of the encoder-decoder model with attention. In our model, the character-level cross-attention mechanism is complemented with a self-attention module over substrings of the input. We design a novel approach for extracting patterns from attention weights to interpret what the model learns. We apply our methodology to analyze the model's decisions on three typologically different languages and find that a) our pattern extraction method applied to cross-attention weights uncovers variation in the form of inflection morphemes, b) pattern extraction from self-attention reveals the triggers for such variation, and c) both types of patterns align closely with grammatical inflection classes and class assignment criteria in all three languages. Additionally, we find that the proposed encoder attention component leads to consistent performance improvements over a strong baseline.
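The pattern-extraction idea described in the abstract can be illustrated with a toy sketch: given a cross-attention matrix over (target, source) character pairs, one simple heuristic is to read off, for each output character, the span of input characters it attends to above a threshold. The `extract_patterns` function, the threshold value, and the toy attention matrix below are hypothetical illustrations under that assumption, not the authors' actual extraction algorithm.

```python
def extract_patterns(attn, src, tgt, threshold=0.5):
    """Illustrative heuristic: for each target character, collect the span
    of source characters whose attention weight exceeds `threshold`.
    `attn[j][i]` is the weight of target position j on source position i."""
    patterns = []
    for j, t_ch in enumerate(tgt):
        idx = [i for i, w in enumerate(attn[j]) if w > threshold]
        if idx:
            patterns.append((t_ch, src[idx[0]:idx[-1] + 1]))
    return patterns

# Toy example: inflecting "run" -> "ran" with a near-diagonal attention matrix.
src, tgt = "run", "ran"
attn = [
    [0.90, 0.05, 0.05],  # 'r' attends to 'r'
    [0.10, 0.80, 0.10],  # 'a' attends to 'u' (the alternating vowel)
    [0.05, 0.05, 0.90],  # 'n' attends to 'n'
]
print(extract_patterns(attn, src, tgt))
# [('r', 'r'), ('a', 'u'), ('n', 'n')]
```

The `('a', 'u')` pair is the kind of character-level alignment that, aggregated over many forms, could surface a vowel-alternation pattern of the sort the paper links to inflection classes.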

Citation (APA)

Ruzsics, T., Sozinova, O., Gutierrez-Vasques, X., & Samardžić, T. (2021). Interpretability for morphological inflection: From character-level predictions to subword-level rules. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3189–3201). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.278
