Dialect Representation Learning with Neural Dialect-to-Standard Normalization


Abstract

Language label tokens are often used in multilingual neural language modeling and sequence-to-sequence learning to enhance the performance of such models. A by-product of this technique is that the models learn representations of the language tokens, which in turn reflect the relationships between the languages. In this paper, we study the learned representations of dialects produced by neural dialect-to-standard normalization models. We use two large datasets of typologically different languages, namely Finnish and Norwegian, and evaluate the learned representations against traditional dialect divisions of both languages. We find that the inferred dialect embeddings correlate well with the traditional dialects. The methodology could further be used in noisier settings to gain new insights into language variation.
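The technique described above prepends a dialect label token to each source sentence, so that after training, the embedding learned for each label token serves as a representation of that dialect. A minimal sketch of how such token embeddings could be extracted and compared follows; the token names and the random embedding matrix are illustrative stand-ins, not the paper's actual vocabulary or trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dialect label tokens (illustrative names, not from the paper).
# In practice these would be prepended to each source sentence, e.g.
# "<savo> mie oon täällä" -> normalized standard-language output.
dialect_tokens = ["<southwest>", "<haeme>", "<savo>", "<southeast>"]

# Stand-in for a trained model's embedding matrix; assume the dialect
# tokens occupy the last rows of the vocabulary.
vocab_size, dim = 100, 16
embeddings = rng.normal(size=(vocab_size, dim))
token_ids = {tok: vocab_size - len(dialect_tokens) + i
             for i, tok in enumerate(dialect_tokens)}

def cosine_similarity_matrix(vectors):
    """Pairwise cosine similarities between dialect embeddings."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return unit @ unit.T

# Extract the learned dialect representations and compare them;
# the resulting similarity matrix can be clustered and evaluated
# against traditional dialect divisions.
dialect_vecs = embeddings[[token_ids[t] for t in dialect_tokens]]
sim = cosine_similarity_matrix(dialect_vecs)
```

A similarity (or distance) matrix like `sim` could then be fed to hierarchical clustering and compared with dialectological classifications.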

Citation (APA)

Kuparinen, O., & Scherrer, Y. (2023). Dialect Representation Learning with Neural Dialect-to-Standard Normalization. In ACL 2023 - 10th Workshop on NLP for Similar Languages, Varieties and Dialects, VarDial 2023 - Proceedings of the Workshop (pp. 200–212). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.vardial-1.20
