Tiny Word Embeddings Using Globally Informed Reconstruction

Abstract

We reduce the model size of pre-trained word embeddings by a factor of 200 while preserving their quality. Previous studies in this direction created a smaller word embedding model by reconstructing pre-trained word representations from those of subwords, which allows storing only a small number of subword embeddings in memory. However, previous studies that train the reconstruction models using only target words cannot achieve extreme model-size reduction while preserving quality. Inspired by the observation that words with similar meanings have similar embeddings, our reconstruction training learns the global relationships among words, and it can be employed in various models for word embedding reconstruction. Experimental results on word similarity benchmarks show that the proposed method improves the performance of all subword-based reconstruction models.
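To make the idea concrete, the following is a minimal sketch of subword-based reconstruction with a globally informed training loss. The abstract only states that training exploits global relationships among words (similar words having similar embeddings); the class name, the mean-of-subwords reconstruction, and the pairwise-similarity penalty below are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: reconstruct pre-trained word vectors from subword
# embeddings, training with (a) a local reconstruction term and (b) a global
# term that preserves pairwise similarities among words in the batch.
import torch
import torch.nn as nn


class SubwordReconstructor(nn.Module):
    """Reconstruct a word vector as the mean of its subword embeddings."""

    def __init__(self, n_subwords: int, dim: int):
        super().__init__()
        self.subword_emb = nn.Embedding(n_subwords, dim)

    def forward(self, subword_ids: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # subword_ids, mask: (batch, max_subwords); mask is 1 for real subwords.
        vecs = self.subword_emb(subword_ids) * mask.unsqueeze(-1)
        return vecs.sum(dim=1) / mask.sum(dim=1, keepdim=True)


def globally_informed_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Local term: match each reconstructed vector to its pre-trained target.
    local = ((pred - target) ** 2).sum(dim=1).mean()
    # Global term (one plausible reading of "globally informed"): make pairwise
    # similarities among reconstructed vectors match those of the pre-trained
    # embeddings within the batch.
    sim_pred = pred @ pred.T
    sim_target = target @ target.T
    global_term = ((sim_pred - sim_target) ** 2).mean()
    return local + global_term
```

Under this sketch, only the small subword embedding table is kept at inference time, while the full pre-trained word embedding matrix is needed only as a training target.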

Citation (APA)

Ohashi, S., Isogawa, M., Kajiwara, T., & Arase, Y. (2020). Tiny Word Embeddings Using Globally Informed Reconstruction. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 1199–1203). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.103
