Learning language representations for typology prediction

63 citations · 139 readers (Mendeley)

Abstract

One central mystery of neural NLP is what neural models “know” about their subject matter. When a neural machine translation system learns to translate from one language to another, does it learn the syntax or semantics of the languages? Can this knowledge be extracted from the system to fill holes in human scientific knowledge? Existing typological databases contain relatively full feature specifications for only a few hundred languages. Exploiting the existence of parallel texts in more than a thousand languages, we build a massive many-to-one neural machine translation (NMT) system from 1017 languages into English, and use this to predict information missing from typological databases. Experiments show that the proposed method is able to infer not only syntactic, but also phonological and phonetic inventory features, and improves over a baseline that has access to information about the languages’ geographic and phylogenetic neighbors.
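The pipeline the abstract describes (train a many-to-one NMT model, read off a per-language representation, then classify typological features for languages missing from the database) can be illustrated with a toy sketch. The snippet below is not the authors' implementation: the language vectors, the binary feature, and the train/missing split are synthetic stand-ins, and a plain logistic regression stands in for whatever classifier the paper actually uses.

```python
# Minimal sketch (hypothetical, not the paper's code): predict a typological
# feature from learned language vectors. `lang_vecs` stands in for embeddings
# extracted from a many-to-one NMT model (e.g., its language-token embeddings);
# `labels` stands in for known feature values from a typological database.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_langs, dim = 1017, 512                       # languages in the corpus, embedding size
lang_vecs = rng.normal(size=(n_langs, dim))    # synthetic stand-in for NMT language vectors

# Suppose the first 300 languages have a known binary feature value
# (e.g., "has SVO word order": 1/0) and the rest are missing from the database.
known, missing = slice(0, 300), slice(300, n_langs)
labels = rng.integers(0, 2, size=300)          # synthetic stand-in for database values

clf = LogisticRegression(max_iter=1000)
clf.fit(lang_vecs[known], labels)

# Fill the database gaps: predict the feature for the remaining languages.
predicted = clf.predict(lang_vecs[missing])
print(predicted[:10])
```

With real inputs, `lang_vecs` would come from the trained NMT system and `labels` from a resource such as a typological database; one classifier would be trained per feature, and its predictions would supply the missing entries.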

Citation (APA)

Malaviya, C., Neubig, G., & Littell, P. (2017). Learning language representations for typology prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2529–2535). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1268
