SigMoreFun Submission to the SIGMORPHON Shared Task on Interlinear Glossing


Abstract

In our submission to the SIGMORPHON 2023 Shared Task on interlinear glossing, we explore approaches to data augmentation and modeling for generating interlinear glossed text (IGT) across seven low-resource languages. For data augmentation, we explore two approaches: creating artificial data from the provided training data and utilizing existing IGT resources in other languages. On the modeling side, we test an enhanced version of the provided token classification baseline as well as a pretrained multilingual seq2seq model. Additionally, we apply post-correction using a dictionary for Gitksan, the language with the smallest amount of data. We find that our token classification models perform best, achieving the highest word-level accuracy for Arapaho and the highest morpheme-level accuracy for Gitksan among all submissions. We also show that data augmentation is an effective strategy, though pretraining on artificial data has markedly different effects on the two models tested.
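To illustrate the token classification framing of glossing mentioned in the abstract, the sketch below assigns one gloss label per source word. The lookup table, words, and gloss labels are invented for illustration only; they are not the shared task baseline, the task data, or the authors' model, where a trained classifier would replace the dictionary lookup.

```python
# Illustrative sketch: interlinear glossing as word-level token classification.
# All vocabulary and gloss labels here are hypothetical placeholders.

from typing import List

# Toy mapping from source words to gloss labels (purely invented);
# in a real system this would be a trained token classifier.
TOY_GLOSSES = {
    "worda": "1SG",
    "wordb": "see-PST",
    "wordc": "dog-PL",
}

def gloss_sentence(words: List[str]) -> List[str]:
    """Predict one gloss label per word; unknown words get a placeholder."""
    return [TOY_GLOSSES.get(w.lower(), "UNK") for w in words]

if __name__ == "__main__":
    sentence = ["WordA", "wordb", "wordc"]
    # Pair each word with its predicted gloss, mimicking an IGT gloss line.
    print(list(zip(sentence, gloss_sentence(sentence))))
```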

Citation (APA)
He, T., Tjuatja, L., Robinson, N., Watanabe, S., Mortensen, D. R., Neubig, G., & Levin, L. (2023). SigMoreFun Submission to the SIGMORPHON Shared Task on Interlinear Glossing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 209–216). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.sigmorphon-1.22
