Evaluating memory efficiency and robustness of word embeddings


Abstract

Skip-Gram word embeddings, estimated from large text corpora, have been shown to improve many NLP tasks through their high-quality features. However, little is known about their robustness against parameter perturbations or about their efficiency in preserving word similarities under memory constraints. In this paper, we investigate three post-processing methods for word embeddings to study their robustness and memory efficiency. We employ a dimensionality-based, a parameter-based, and a resolution-based method to obtain parameter-reduced embeddings, and we provide a concept that connects the three approaches. We contrast these methods in terms of the relative accuracy loss on six intrinsic evaluation tasks and compare them with regard to the memory efficiency of the reduced embeddings. The evaluation shows that low bit-resolution embeddings offer great potential for memory savings while keeping the risk of accuracy loss low. The results indicate that post-processed word embeddings could also enhance applications on resource-limited devices with valuable word features.
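The abstract names three families of post-processing reductions but not their exact procedures. The sketch below illustrates one plausible reading of each: truncated SVD for the dimensionality-based method, magnitude pruning for the parameter-based method, and uniform quantization for the resolution-based method. These are assumptions for illustration only; the paper's actual algorithms may differ.

```python
import numpy as np

def reduce_dimensionality(E, d):
    """Dimensionality-based reduction: project embeddings onto their
    top-d principal directions via truncated SVD (an assumed stand-in
    for the paper's method)."""
    E_centered = E - E.mean(axis=0)
    _, _, Vt = np.linalg.svd(E_centered, full_matrices=False)
    return E_centered @ Vt[:d].T  # shape: (vocab_size, d)

def prune_parameters(E, keep_fraction):
    """Parameter-based reduction: zero out the smallest-magnitude
    entries, keeping only keep_fraction of the parameters
    (magnitude pruning, again an assumption)."""
    threshold = np.quantile(np.abs(E), 1.0 - keep_fraction)
    return np.where(np.abs(E) >= threshold, E, 0.0)

def quantize_resolution(E, bits):
    """Resolution-based reduction: uniform quantization of each value
    to 2**bits levels over the matrix's value range, then dequantize
    so the result can be fed to standard similarity evaluations."""
    levels = 2 ** bits
    lo, hi = float(E.min()), float(E.max())
    step = (hi - lo) / (levels - 1)
    codes = np.round((E - lo) / step)  # integer codes in [0, levels-1]
    return codes * step + lo

# Toy usage: 1000 "words" with 300-dimensional random embeddings.
E = np.random.randn(1000, 300).astype(np.float32)
E_dim = reduce_dimensionality(E, d=100)          # fewer dimensions
E_pruned = prune_parameters(E, keep_fraction=0.3)  # sparser matrix
E_quant = quantize_resolution(E, bits=4)         # 4 bits per value
```

In this reading, memory savings come from storing fewer dimensions, a sparse representation of the surviving parameters, or the low-bit integer codes (e.g., 4 bits per value instead of 32-bit floats); accuracy loss would then be measured by re-running intrinsic similarity tasks on the reduced embeddings.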

Citation (APA)

Jurgovsky, J., Granitzer, M., & Seifert, C. (2016). Evaluating memory efficiency and robustness of word embeddings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9626, pp. 200–211). Springer Verlag. https://doi.org/10.1007/978-3-319-30671-1_15
