Refining pretrained word embeddings using layer-wise relevance propagation

Abstract

In this paper, we propose a simple method for refining pretrained word embeddings using layer-wise relevance propagation. Given a target semantic representation that one would like word vectors to reflect, our method first trains a neural network to map the original word vectors to the target representation. The estimated target values are then propagated backward toward the word vectors, and a relevance score is computed for each dimension of a word vector. Finally, the relevance score vectors are used to refine the original word vectors so that they are projected into the subspace that reflects the information relevant to the target representation. An evaluation experiment using binary classification of word pairs demonstrates that the vectors refined by our method achieve higher performance than the original vectors.
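
The abstract does not spell out the network architecture or the exact refinement rule, so the sketch below is only a minimal illustration of the pipeline it describes: a one-hidden-layer ReLU network stands in for the trained mapping, the standard epsilon-LRP rule propagates the estimated target values back to the input dimensions, and the final relevance-weighted rescaling is an assumed, hypothetical refinement step. The layer sizes, random weights, and variable names are likewise assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 300-d pretrained embeddings mapped to a 50-d target
# representation through a 128-unit hidden layer. These sizes, and the
# random weights standing in for the trained mapping, are illustrative.
d_in, d_hid, d_out = 300, 128, 50
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))
W2 = rng.normal(scale=0.1, size=(d_hid, d_out))

def lrp_relevance(x, eps=1e-6):
    """Propagate the mapping's output back to the input, yielding one
    relevance score per embedding dimension (epsilon-LRP rule)."""
    # Forward pass through the assumed one-hidden-layer ReLU network.
    z1 = x @ W1
    a1 = np.maximum(z1, 0.0)
    z2 = a1 @ W2

    # Initialize relevance at the output with the estimated target values.
    r = z2

    # Output -> hidden: R_j = a_j * sum_k W_jk * R_k / (z_k + eps).
    s = r / (z2 + eps * np.where(z2 >= 0, 1.0, -1.0))
    r = a1 * (W2 @ s)

    # Hidden -> input: same rule one layer down.
    s = r / (z1 + eps * np.where(z1 >= 0, 1.0, -1.0))
    return x * (W1 @ s)

x = rng.normal(size=d_in)        # stand-in for one pretrained word vector
r = lrp_relevance(x)

# Assumed refinement step: reweight each dimension by its normalized
# relevance so the vector emphasizes target-relevant directions.
refined = x * (r / (np.abs(r).sum() + 1e-12))
```

In practice the weights would come from training the mapping on pairs of word vectors and their target representations; the backward relevance pass itself requires no additional training.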

Cite

APA

Utsumi, A. (2018). Refining pretrained word embeddings using layer-wise relevance propagation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 4840–4846). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1520
