Linear Discriminative Learning: A competitive non-neural baseline for morphological inflection


Abstract

This paper presents our submission to SIGMORPHON 2023 Task 2, Cognitively Plausible Morphophonological Generalization in Korean. We implemented both Linear Discriminative Learning and Transformer models and found that the Linear Discriminative Learning model trained on a combination of corpus and experimental data showed the best performance, with an overall accuracy of around 83%. We found that the best model must be trained on both the corpus data and the experimental data of one particular participant. Our examination of speaker variability and speaker-specific information did not explain why this particular participant combined well with the corpus data. We recommend Linear Discriminative Learning models as a future non-neural baseline system, owing to their training speed, accuracy, model interpretability, and cognitive plausibility. To improve model performance, we suggest training on larger datasets and/or performing data augmentation, and incorporating speaker- and item-specific information more systematically.
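As background to the abstract, Linear Discriminative Learning maps form (cue) vectors to meaning (semantic) vectors with a single linear transformation estimated in closed form. Below is a minimal, hedged sketch with hypothetical toy matrices, not the authors' actual Korean data or feature scheme: the comprehension mapping F solves C F ≈ S by least squares, and a form is "understood" correctly if its predicted semantic vector is closest to the right row of S.

```python
import numpy as np

# Toy illustration of Linear Discriminative Learning (hypothetical data):
# each row of C is a form (cue) vector, each row of S its semantic vector.
C = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.]])
S = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

# Comprehension mapping F solves C @ F ~= S in the least-squares sense;
# the pseudoinverse gives the closed-form solution (no iterative training).
F = np.linalg.pinv(C) @ S
S_hat = C @ F

# A form is mapped to the semantic vector of the closest row of S
# (Euclidean distance here for simplicity; correlation is also common).
pred = np.argmin(
    np.linalg.norm(S_hat[:, None, :] - S[None, :, :], axis=2), axis=1
)
print(pred)  # index of the best-matching semantic vector per form
```

The closed-form solve is what makes LDL fast to train relative to neural baselines, and the mapping F itself can be inspected, which underlies the interpretability claim in the abstract.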

Citation (APA)

Jeong, C., Schmitz, D., Ramarao, A. K., Stein, A. S., & Tang, K. (2023). Linear Discriminative Learning: A competitive non-neural baseline for morphological inflection. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 138–150). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.sigmorphon-1.16
