A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders

Citations: 2
Readers (Mendeley): 47

Abstract

Powerful sentence encoders trained for multiple languages are on the rise. These systems are capable of embedding a wide range of linguistic properties into vector representations. While explicit probing tasks can be used to verify the presence of specific linguistic properties, it is unclear whether the vector representations can be manipulated to indirectly steer such properties. For efficient learning, we investigate the use of a geometric mapping in embedding space to transform linguistic properties, without any tuning of the pre-trained sentence encoder or decoder. We validate our approach on three linguistic properties using a pre-trained multilingual autoencoder and analyze the results in both monolingual and cross-lingual settings.
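The abstract describes steering linguistic properties via a geometric mapping in embedding space, without tuning the pre-trained encoder or decoder. As a purely illustrative sketch (not the authors' released code), one of the simplest such mappings is an offset vector: the mean difference between embeddings of sentences that have the target property and those that do not. All function names and data below are hypothetical.

```python
import numpy as np

def fit_offset(src_embs: np.ndarray, tgt_embs: np.ndarray) -> np.ndarray:
    """Estimate a translation vector from source-property to target-property
    sentence embeddings; both arrays have shape (n_sentences, dim)."""
    return tgt_embs.mean(axis=0) - src_embs.mean(axis=0)

def transform(emb: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Shift one sentence embedding toward the target property; a frozen
    pre-trained decoder would then generate text from the shifted vector."""
    return emb + offset

# Toy example with random stand-in "embeddings" of dimension 4.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 4))
tgt = src + 1.0  # target cluster shifted by a constant in every dimension
offset = fit_offset(src, tgt)
shifted = transform(src[0], offset)
```

Because the encoder and decoder stay frozen, only the offset itself has to be learned, which is what makes this family of methods cheap; the paper evaluates such mappings monolingually and cross-lingually on three linguistic properties.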

Citation (APA)
De Raedt, M., Godin, F., Buteneers, P., Develder, C., & Demeester, T. (2021). A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 10108–10114). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.792
