Continuous multilinguality with language vectors

Citations: 56
Readers (Mendeley): 104

Abstract

Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.
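The core idea of conditioning one shared character-level language model on a learned per-language vector can be sketched roughly as follows (a minimal illustration, not the authors' implementation: the dimensions, a plain tanh recurrence in place of their LSTM, and all variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 990 languages as in the paper's data; the
# embedding and hidden dimensions here are illustrative only.
n_langs, lang_dim = 990, 64
n_chars, char_dim = 256, 128
hidden_dim = 256

# Learned language vectors and character embeddings (randomly
# initialized here; in training they would be updated by backprop).
lang_emb = rng.normal(size=(n_langs, lang_dim))
char_emb = rng.normal(size=(n_chars, char_dim))

# Parameters of a single recurrent cell over the concatenated input.
W = rng.normal(size=(char_dim + lang_dim + hidden_dim, hidden_dim)) * 0.01
b = np.zeros(hidden_dim)

def rnn_step(h, char_id, lang_id):
    """One recurrent step: the language vector is concatenated to the
    character embedding, so the same weights serve every language."""
    x = np.concatenate([char_emb[char_id], lang_emb[lang_id], h])
    return np.tanh(x @ W + b)

# Run a few characters through the model conditioned on language 42.
h = np.zeros(hidden_dim)
for c in [72, 101, 106]:
    h = rnn_step(h, c, lang_id=42)
```

Because the language identity enters only through a continuous vector, interpolating or adjusting that vector lets the model generalize to varieties not seen during training, which is the property the abstract highlights.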

Citation (APA)

Östling, R., & Tiedemann, J. (2017). Continuous multilinguality with language vectors. In 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference (Vol. 2, pp. 644–649). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/e17-2102
