Abstract
Creating accurate meta-embeddings from pretrained source embeddings has received much attention lately. Methods based on global and locally linear transformation, and on concatenation, have been shown to produce accurate meta-embeddings. In this paper, we show that the arithmetic mean of two distinct word embedding sets yields a performant meta-embedding that is comparable to or better than those produced by more complex meta-embedding learning methods. This result seems counter-intuitive, given that the vector spaces of different source embeddings are not comparable and cannot simply be averaged. We give insight into why averaging can still produce accurate meta-embeddings despite the incomparability of the source vector spaces.
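Because the method is literally word-wise averaging, it can be sketched in a few lines. The following Python snippet is a minimal illustration, not the authors' reference implementation: the function name, the dict-of-arrays interface, the restriction to the shared vocabulary, and the per-vector L2 normalization (one simple way to put the two spaces on a comparable scale) are all assumptions made for this example, and both sources are assumed to have the same dimensionality.

import numpy as np

def average_meta_embedding(source1, source2):
    # Hypothetical helper: builds a meta-embedding over the words that
    # appear in both source vocabularies. Each source is a dict mapping
    # a word to a 1-D numpy array of the same dimensionality.
    meta = {}
    for word in source1.keys() & source2.keys():
        # L2-normalize each source vector (an assumption made in this
        # sketch, to put the two spaces on a comparable scale) ...
        v1 = source1[word] / np.linalg.norm(source1[word])
        v2 = source2[word] / np.linalg.norm(source2[word])
        # ... then take the arithmetic mean as the meta-embedding.
        meta[word] = (v1 + v2) / 2.0
    return meta

# Toy usage with made-up 3-dimensional vectors.
glove_like = {"cat": np.array([0.2, 0.7, 0.1])}
w2v_like = {"cat": np.array([0.5, 0.1, 0.9])}
print(average_meta_embedding(glove_like, w2v_like)["cat"])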
CITATION STYLE
Coates, J. N., & Bollegala, D. (2018). Frustratingly easy meta-embedding: Computing meta-embeddings by averaging source word embeddings. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 2, pp. 194–198). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-2031