Morty: Unsupervised learning of task-specialized word embeddings by autoencoding

2 Citations — citations of this article
77 Readers — Mendeley users who have this article in their library

Abstract

Word embeddings have undoubtedly revolutionized NLP. However, pre-trained embeddings do not always work for a specific task (or set of tasks), particularly in limited-resource setups. We introduce a simple yet effective, self-supervised post-processing method that constructs task-specialized word representations by picking from a menu of reconstructing transformations to yield improved end-task performance (MORTY). The method is complementary to recent state-of-the-art approaches to inductive transfer via fine-tuning, and forgoes costly model architectures and annotation. We evaluate MORTY on a broad range of setups, including different word embedding methods, corpus sizes, and end-task semantics. Finally, we provide a surprisingly simple recipe to obtain specialized embeddings that better fit end-tasks.
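The core idea described in the abstract — post-processing pre-trained embeddings by training an autoencoder to reconstruct them, then using the learned hidden representation as the specialized embeddings — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, hyperparameters, and plain-SGD linear autoencoder are assumptions for the sake of the example, and the paper's actual "menu of reconstructing transformations" and model selection are not reproduced here.

```python
import numpy as np

def autoencode_embeddings(E, hidden_dim, epochs=50, lr=0.05, seed=0):
    """Illustrative autoencoder post-processing of pre-trained embeddings.

    E: (vocab_size, d) matrix of pre-trained word embeddings.
    Trains a linear autoencoder E -> H -> R to reconstruct E, then
    returns the hidden codes H = E @ W as specialized embeddings.
    All names and hyperparameters are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    v, d = E.shape
    W = rng.normal(0.0, 0.1, (d, hidden_dim))   # encoder weights
    U = rng.normal(0.0, 0.1, (hidden_dim, d))   # decoder weights
    for _ in range(epochs):
        H = E @ W                 # encode
        R = H @ U                 # reconstruct
        G = (R - E) / v           # gradient of 0.5 * mean squared error w.r.t. R
        grad_U = H.T @ G          # backprop into decoder
        grad_W = E.T @ (G @ U.T)  # backprop into encoder
        W -= lr * grad_W
        U -= lr * grad_U
    return E @ W                  # specialized embeddings
```

In this sketch, changing `hidden_dim` (and, in the paper's framing, the choice of reconstruction objective) yields a menu of candidate representations, from which the one giving the best end-task performance is kept.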

Cite

CITATION STYLE

APA

Rethmeier, N., & Plank, B. (2019). Morty: Unsupervised learning of task-specialized word embeddings by autoencoding. In ACL 2019 - 4th Workshop on Representation Learning for NLP, RepL4NLP 2019 - Proceedings of the Workshop (pp. 49–54). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-4307
