Using LARA to create image-based and phonetically annotated multimodal texts for endangered languages

Citations: 5 · Mendeley readers: 27

Abstract

We describe recent extensions to the open source Learning And Reading Assistant (LARA) supporting image-based and phonetically annotated texts. We motivate the utility of these extensions both in general and specifically in relation to endangered and archaic languages, and illustrate with examples from the revived Australian language Barngarla, Icelandic Sign Language, Irish Gaelic, Old Norse manuscripts and Egyptian hieroglyphics.

Cite

APA

Bédi, B., Beedar, H., Chiera, B., Ivanova, N., Maizonniaux, C., Chiaráin, N. N., … Zuckermann, G. (2022). Using LARA to create image-based and phonetically annotated multimodal texts for endangered languages. In COMPUTEL 2022 - 5th Workshop on the Use of Computational Methods in the Study of Endangered Languages, Proceedings of the Workshop (pp. 68–77). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.computel-1.9
